00:00:00.001 Started by upstream project "autotest-per-patch" build number 122889 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.086 The recommended git tool is: git 00:00:00.086 using credential 00000000-0000-0000-0000-000000000002 00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.120 Fetching changes from the remote Git repository 00:00:00.127 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.165 Using shallow fetch with depth 1 00:00:00.165 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.165 > git --version # timeout=10 00:00:00.196 > git --version # 'git version 2.39.2' 00:00:00.196 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.196 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.196 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:10.239 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:10.252 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:10.267 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:10.267 > git config core.sparsecheckout # timeout=10 00:00:10.279 > git read-tree -mu HEAD # timeout=10 00:00:10.298 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:10.321 Commit message: "inventory/dev: add missing long names" 00:00:10.321 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:10.410 [Pipeline] Start of Pipeline 00:00:10.425 [Pipeline] library 00:00:10.427 Loading library shm_lib@master 00:00:10.428 Library shm_lib@master is cached. Copying from home. 00:00:10.445 [Pipeline] node 00:00:10.459 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:10.461 [Pipeline] { 00:00:10.494 [Pipeline] catchError 00:00:10.509 [Pipeline] { 00:00:10.533 [Pipeline] wrap 00:00:10.541 [Pipeline] { 00:00:10.547 [Pipeline] stage 00:00:10.548 [Pipeline] { (Prologue) 00:00:10.731 [Pipeline] sh 00:00:11.012 + logger -p user.info -t JENKINS-CI 00:00:11.032 [Pipeline] echo 00:00:11.033 Node: WFP22 00:00:11.041 [Pipeline] sh 00:00:11.338 [Pipeline] setCustomBuildProperty 00:00:11.349 [Pipeline] echo 00:00:11.350 Cleanup processes 00:00:11.355 [Pipeline] sh 00:00:11.645 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.645 1822439 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.668 [Pipeline] sh 00:00:11.955 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.955 ++ grep -v 'sudo pgrep' 00:00:11.955 ++ awk '{print $1}' 00:00:11.955 + sudo kill -9 00:00:11.955 + true 00:00:11.970 [Pipeline] cleanWs 00:00:11.980 [WS-CLEANUP] Deleting project workspace... 00:00:11.980 [WS-CLEANUP] Deferred wipeout is used... 
00:00:11.987 [WS-CLEANUP] done 00:00:11.992 [Pipeline] setCustomBuildProperty 00:00:12.007 [Pipeline] sh 00:00:12.289 + sudo git config --global --replace-all safe.directory '*' 00:00:12.360 [Pipeline] nodesByLabel 00:00:12.361 Found a total of 1 nodes with the 'sorcerer' label 00:00:12.373 [Pipeline] httpRequest 00:00:12.378 HttpMethod: GET 00:00:12.378 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:12.419 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:12.419 Response Code: HTTP/1.1 200 OK 00:00:12.420 Success: Status code 200 is in the accepted range: 200,404 00:00:12.420 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:16.676 [Pipeline] sh 00:00:16.961 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:16.982 [Pipeline] httpRequest 00:00:16.987 HttpMethod: GET 00:00:16.987 URL: http://10.211.164.101/packages/spdk_62bc4f069f41d8a2292e9ce21f92cfbb075a44cf.tar.gz 00:00:16.988 Sending request to url: http://10.211.164.101/packages/spdk_62bc4f069f41d8a2292e9ce21f92cfbb075a44cf.tar.gz 00:00:17.004 Response Code: HTTP/1.1 200 OK 00:00:17.004 Success: Status code 200 is in the accepted range: 200,404 00:00:17.005 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_62bc4f069f41d8a2292e9ce21f92cfbb075a44cf.tar.gz 00:00:40.819 [Pipeline] sh 00:00:41.102 + tar --no-same-owner -xf spdk_62bc4f069f41d8a2292e9ce21f92cfbb075a44cf.tar.gz 00:00:43.647 [Pipeline] sh 00:00:43.928 + git -C spdk log --oneline -n5 00:00:43.928 62bc4f069 raid: fix race between process starting and removing a base bdev 00:00:43.928 0c05a3476 raid: don't remove an unconfigured base bdev 00:00:43.928 01f10b8a3 raid: fix race between starting rebuild and creating io channel 00:00:43.928 4506c0c36 test/common: Enable inherit_errexit 00:00:43.928 b24df7cfa test: Drop superfluous calls to print_backtrace() 00:00:43.939 [Pipeline] } 00:00:43.957 [Pipeline] // stage 00:00:43.966 [Pipeline] stage 00:00:43.968 [Pipeline] { (Prepare) 00:00:43.987 [Pipeline] writeFile 00:00:44.006 [Pipeline] sh 00:00:44.287 + logger -p user.info -t JENKINS-CI 00:00:44.300 [Pipeline] sh 00:00:44.580 + logger -p user.info -t JENKINS-CI 00:00:44.591 [Pipeline] sh 00:00:44.871 + cat autorun-spdk.conf 00:00:44.871 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.871 SPDK_TEST_NVMF=1 00:00:44.871 SPDK_TEST_NVME_CLI=1 00:00:44.871 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:44.871 SPDK_TEST_NVMF_NICS=e810 00:00:44.871 SPDK_TEST_VFIOUSER=1 00:00:44.871 SPDK_RUN_UBSAN=1 00:00:44.871 NET_TYPE=phy 00:00:44.877 RUN_NIGHTLY=0 00:00:44.882 [Pipeline] readFile 00:00:44.904 [Pipeline] withEnv 00:00:44.906 [Pipeline] { 00:00:44.920 [Pipeline] sh 00:00:45.197 + set -ex 00:00:45.197 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:45.197 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:45.197 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.197 ++ SPDK_TEST_NVMF=1 00:00:45.197 ++ SPDK_TEST_NVME_CLI=1 00:00:45.197 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.197 ++ SPDK_TEST_NVMF_NICS=e810 00:00:45.197 ++ SPDK_TEST_VFIOUSER=1 00:00:45.197 ++ SPDK_RUN_UBSAN=1 00:00:45.197 ++ NET_TYPE=phy 00:00:45.197 ++ RUN_NIGHTLY=0 00:00:45.197 + case $SPDK_TEST_NVMF_NICS in 00:00:45.197 + DRIVERS=ice 00:00:45.197 + [[ tcp == \r\d\m\a ]] 00:00:45.197 + [[ -n ice ]] 00:00:45.197 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 
00:00:45.197 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:45.197 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:45.197 rmmod: ERROR: Module irdma is not currently loaded 00:00:45.197 rmmod: ERROR: Module i40iw is not currently loaded 00:00:45.197 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:45.197 + true 00:00:45.197 + for D in $DRIVERS 00:00:45.197 + sudo modprobe ice 00:00:45.197 + exit 0 00:00:45.207 [Pipeline] } 00:00:45.225 [Pipeline] // withEnv 00:00:45.231 [Pipeline] } 00:00:45.247 [Pipeline] // stage 00:00:45.257 [Pipeline] catchError 00:00:45.259 [Pipeline] { 00:00:45.277 [Pipeline] timeout 00:00:45.277 Timeout set to expire in 40 min 00:00:45.279 [Pipeline] { 00:00:45.298 [Pipeline] stage 00:00:45.300 [Pipeline] { (Tests) 00:00:45.319 [Pipeline] sh 00:00:45.619 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.619 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.619 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.619 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:45.619 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:45.619 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:45.619 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:45.619 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:45.619 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:45.619 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:45.619 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.619 + source /etc/os-release 00:00:45.619 ++ NAME='Fedora Linux' 00:00:45.619 ++ VERSION='38 (Cloud Edition)' 00:00:45.619 ++ ID=fedora 00:00:45.619 ++ VERSION_ID=38 00:00:45.619 ++ VERSION_CODENAME= 00:00:45.619 ++ PLATFORM_ID=platform:f38 00:00:45.619 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:45.619 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:45.619 ++ LOGO=fedora-logo-icon 00:00:45.619 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:45.619 ++ HOME_URL=https://fedoraproject.org/ 00:00:45.619 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:45.619 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:45.619 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:45.619 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:45.619 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:45.619 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:45.619 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:45.619 ++ SUPPORT_END=2024-05-14 00:00:45.619 ++ VARIANT='Cloud Edition' 00:00:45.619 ++ VARIANT_ID=cloud 00:00:45.620 + uname -a 00:00:45.620 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:45.620 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:48.926 Hugepages 00:00:48.926 node hugesize free / total 00:00:48.926 node0 1048576kB 0 / 0 00:00:48.926 node0 2048kB 0 / 0 00:00:48.926 node1 1048576kB 0 / 0 00:00:48.926 node1 2048kB 0 / 0 00:00:48.926 00:00:48.926 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:48.926 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:48.926 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:48.926 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:48.926 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:48.926 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:48.926 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 
00:00:48.926 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:48.926 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:48.926 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:48.926 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:48.926 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:48.926 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:48.926 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:48.926 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:48.926 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:48.926 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:48.926 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:48.926 + rm -f /tmp/spdk-ld-path 00:00:48.926 + source autorun-spdk.conf 00:00:48.926 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.926 ++ SPDK_TEST_NVMF=1 00:00:48.926 ++ SPDK_TEST_NVME_CLI=1 00:00:48.926 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.926 ++ SPDK_TEST_NVMF_NICS=e810 00:00:48.926 ++ SPDK_TEST_VFIOUSER=1 00:00:48.926 ++ SPDK_RUN_UBSAN=1 00:00:48.926 ++ NET_TYPE=phy 00:00:48.926 ++ RUN_NIGHTLY=0 00:00:48.926 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:48.926 + [[ -n '' ]] 00:00:48.926 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:48.926 + for M in /var/spdk/build-*-manifest.txt 00:00:48.926 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:48.926 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.926 + for M in /var/spdk/build-*-manifest.txt 00:00:48.926 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:48.926 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.926 ++ uname 00:00:48.926 + [[ Linux == \L\i\n\u\x ]] 00:00:48.926 + sudo dmesg -T 00:00:48.926 + sudo dmesg --clear 00:00:48.926 + dmesg_pid=1823330 00:00:48.926 + [[ Fedora Linux == FreeBSD ]] 00:00:48.926 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.926 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.926 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:48.926 + [[ -x /usr/src/fio-static/fio ]] 00:00:48.926 + export FIO_BIN=/usr/src/fio-static/fio 00:00:48.926 + FIO_BIN=/usr/src/fio-static/fio 00:00:48.926 + sudo dmesg -Tw 00:00:48.926 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:48.926 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:48.926 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:48.926 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.926 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.926 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:48.926 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.926 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.926 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:48.926 Test configuration: 00:00:48.926 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.926 SPDK_TEST_NVMF=1 00:00:48.926 SPDK_TEST_NVME_CLI=1 00:00:48.926 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.926 SPDK_TEST_NVMF_NICS=e810 00:00:48.926 SPDK_TEST_VFIOUSER=1 00:00:48.926 SPDK_RUN_UBSAN=1 00:00:48.926 NET_TYPE=phy 00:00:48.926 RUN_NIGHTLY=0 12:02:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:48.926 12:02:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:48.927 12:02:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:48.927 12:02:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:48.927 12:02:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.927 12:02:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.927 12:02:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.927 12:02:17 -- paths/export.sh@5 -- $ export PATH 00:00:48.927 12:02:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.927 12:02:17 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:48.927 12:02:17 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:48.927 12:02:17 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715767337.XXXXXX 00:00:48.927 12:02:17 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715767337.fDrk65 00:00:48.927 12:02:17 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:48.927 12:02:17 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:48.927 12:02:17 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:48.927 12:02:17 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:48.927 12:02:17 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:48.927 12:02:17 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:48.927 12:02:17 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:48.927 12:02:17 -- common/autotest_common.sh@10 -- $ set +x 00:00:48.927 12:02:17 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:48.927 12:02:17 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:48.927 12:02:17 -- pm/common@17 -- $ local monitor 00:00:48.927 12:02:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.927 12:02:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.927 12:02:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.927 12:02:17 -- pm/common@21 -- $ date +%s 00:00:48.927 12:02:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.927 12:02:17 -- pm/common@21 -- $ date +%s 00:00:48.927 12:02:17 -- pm/common@25 -- $ sleep 1 00:00:48.927 12:02:17 -- pm/common@21 -- $ date +%s 00:00:48.927 12:02:17 -- pm/common@21 -- $ date +%s 00:00:48.927 12:02:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715767337 00:00:48.927 12:02:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715767337 00:00:48.927 12:02:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715767337 00:00:48.927 12:02:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715767337 00:00:48.927 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715767337_collect-cpu-load.pm.log 00:00:48.927 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715767337_collect-vmstat.pm.log 00:00:48.927 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715767337_collect-cpu-temp.pm.log 00:00:48.927 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715767337_collect-bmc-pm.bmc.pm.log 00:00:49.862 12:02:18 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:49.862 12:02:18 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:49.862 12:02:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:49.862 12:02:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:49.862 12:02:18 -- spdk/autobuild.sh@16 -- $ date -u 00:00:49.862 Wed May 15 10:02:18 AM UTC 2024 00:00:49.862 12:02:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:49.862 v24.05-pre-661-g62bc4f069 00:00:49.862 12:02:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:49.862 12:02:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:49.862 12:02:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:49.862 12:02:18 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:00:49.862 12:02:18 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:49.862 12:02:18 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.862 ************************************ 00:00:49.862 START TEST ubsan 00:00:49.862 ************************************ 00:00:49.862 12:02:18 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:00:49.862 using ubsan 00:00:49.862 00:00:49.862 real 0m0.001s 00:00:49.862 user 0m0.000s 00:00:49.862 sys 0m0.000s 00:00:49.862 12:02:18 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:00:49.862 12:02:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:49.862 ************************************ 00:00:49.862 END TEST ubsan 00:00:49.862 ************************************ 00:00:50.122 12:02:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:50.122 12:02:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:50.122 12:02:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:50.122 12:02:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:50.122 12:02:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:50.122 12:02:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:50.122 12:02:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:50.122 12:02:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:50.122 12:02:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:50.122 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:50.122 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:50.689 Using 'verbs' RDMA provider 00:01:06.130 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:18.363 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:18.363 Creating mk/config.mk...done. 00:01:18.363 Creating mk/cc.flags.mk...done. 00:01:18.363 Type 'make' to build. 00:01:18.363 12:02:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:18.363 12:02:46 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:01:18.363 12:02:46 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:01:18.363 12:02:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.363 ************************************ 00:01:18.363 START TEST make 00:01:18.363 ************************************ 00:01:18.363 12:02:46 make -- common/autotest_common.sh@1122 -- $ make -j112 00:01:18.363 make[1]: Nothing to be done for 'all'. 
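Note: the configure and make steps logged above can be approximated outside of Jenkins. The sketch below is a distillation of the configure line printed in the log, not the exact CI command; the clone location, submodule step, and -j value are assumptions, and job-specific options such as --with-fio=/usr/src/fio, --with-rdma and --enable-coverage are omitted.

# Minimal sketch of the SPDK configure/build step shown above (assumed paths,
# subset of the logged configure flags).
git clone https://github.com/spdk/spdk && cd spdk
git submodule update --init
./configure --enable-debug --enable-werror --enable-ubsan --with-vfio-user --with-shared
make -j"$(nproc)"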
00:01:19.742 The Meson build system 00:01:19.742 Version: 1.3.1 00:01:19.742 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:19.742 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:19.742 Build type: native build 00:01:19.742 Project name: libvfio-user 00:01:19.742 Project version: 0.0.1 00:01:19.742 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:19.742 C linker for the host machine: cc ld.bfd 2.39-16 00:01:19.742 Host machine cpu family: x86_64 00:01:19.742 Host machine cpu: x86_64 00:01:19.742 Run-time dependency threads found: YES 00:01:19.742 Library dl found: YES 00:01:19.742 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:19.742 Run-time dependency json-c found: YES 0.17 00:01:19.742 Run-time dependency cmocka found: YES 1.1.7 00:01:19.742 Program pytest-3 found: NO 00:01:19.742 Program flake8 found: NO 00:01:19.742 Program misspell-fixer found: NO 00:01:19.742 Program restructuredtext-lint found: NO 00:01:19.742 Program valgrind found: YES (/usr/bin/valgrind) 00:01:19.742 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:19.742 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:19.742 Compiler for C supports arguments -Wwrite-strings: YES 00:01:19.742 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:19.742 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:19.742 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:19.742 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:19.742 Build targets in project: 8 00:01:19.742 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:19.742 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:19.742 00:01:19.742 libvfio-user 0.0.1 00:01:19.742 00:01:19.742 User defined options 00:01:19.742 buildtype : debug 00:01:19.742 default_library: shared 00:01:19.742 libdir : /usr/local/lib 00:01:19.742 00:01:19.742 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:20.001 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:20.001 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:20.001 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:20.001 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:20.001 [4/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:20.001 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:20.001 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:20.001 [7/37] Compiling C object samples/null.p/null.c.o 00:01:20.001 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:20.001 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:20.001 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:20.001 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:20.001 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:20.001 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:20.001 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:20.001 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:20.001 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:20.001 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:20.001 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:20.001 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:20.001 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:20.001 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:20.001 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:20.001 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:20.001 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:20.001 [25/37] Compiling C object samples/server.p/server.c.o 00:01:20.001 [26/37] Compiling C object samples/client.p/client.c.o 00:01:20.001 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:20.259 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:01:20.260 [29/37] Linking target samples/client 00:01:20.260 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:20.260 [31/37] Linking target test/unit_tests 00:01:20.260 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:20.260 [33/37] Linking target samples/gpio-pci-idio-16 00:01:20.260 [34/37] Linking target samples/server 00:01:20.260 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:20.260 [36/37] Linking target samples/null 00:01:20.260 [37/37] Linking target samples/lspci 00:01:20.260 INFO: autodetecting backend as ninja 00:01:20.260 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:20.260 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:20.826 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:20.826 ninja: no work to do. 00:01:26.138 The Meson build system 00:01:26.138 Version: 1.3.1 00:01:26.138 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:26.138 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:26.138 Build type: native build 00:01:26.138 Program cat found: YES (/usr/bin/cat) 00:01:26.138 Project name: DPDK 00:01:26.138 Project version: 23.11.0 00:01:26.138 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:26.138 C linker for the host machine: cc ld.bfd 2.39-16 00:01:26.138 Host machine cpu family: x86_64 00:01:26.138 Host machine cpu: x86_64 00:01:26.138 Message: ## Building in Developer Mode ## 00:01:26.138 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:26.138 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:26.138 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:26.138 Program python3 found: YES (/usr/bin/python3) 00:01:26.138 Program cat found: YES (/usr/bin/cat) 00:01:26.138 Compiler for C supports arguments -march=native: YES 00:01:26.138 Checking for size of "void *" : 8 00:01:26.138 Checking for size of "void *" : 8 (cached) 00:01:26.138 Library m found: YES 00:01:26.138 Library numa found: YES 00:01:26.138 Has header "numaif.h" : YES 00:01:26.138 Library fdt found: NO 00:01:26.138 Library execinfo found: NO 00:01:26.138 Has header "execinfo.h" : YES 00:01:26.138 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:26.138 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:26.138 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:26.138 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:26.138 Run-time dependency openssl found: YES 3.0.9 00:01:26.138 Run-time dependency libpcap found: YES 1.10.4 00:01:26.138 Has header "pcap.h" with dependency libpcap: YES 00:01:26.138 Compiler for C supports arguments -Wcast-qual: YES 00:01:26.138 Compiler for C supports arguments -Wdeprecated: YES 00:01:26.138 Compiler for C supports arguments -Wformat: YES 00:01:26.138 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:26.138 Compiler for C supports arguments -Wformat-security: NO 00:01:26.138 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:26.138 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:26.138 Compiler for C supports arguments -Wnested-externs: YES 00:01:26.138 Compiler for C supports arguments -Wold-style-definition: YES 00:01:26.138 Compiler for C supports arguments -Wpointer-arith: YES 00:01:26.138 Compiler for C supports arguments -Wsign-compare: YES 00:01:26.138 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:26.138 Compiler for C supports arguments -Wundef: YES 00:01:26.138 Compiler for C supports arguments -Wwrite-strings: YES 00:01:26.138 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:26.138 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:26.138 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:26.138 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:26.138 Program objdump found: YES (/usr/bin/objdump) 00:01:26.138 Compiler for C supports arguments -mavx512f: YES 00:01:26.138 Checking if "AVX512 checking" compiles: YES 00:01:26.138 Fetching value of define "__SSE4_2__" : 1 00:01:26.138 Fetching value of define "__AES__" : 1 00:01:26.138 Fetching value of define "__AVX__" : 1 00:01:26.138 Fetching value of define "__AVX2__" : 1 00:01:26.138 Fetching value of define "__AVX512BW__" : 1 00:01:26.138 Fetching value of define "__AVX512CD__" : 1 00:01:26.138 Fetching value of define "__AVX512DQ__" : 1 00:01:26.138 Fetching value of define "__AVX512F__" : 1 00:01:26.138 Fetching value of define "__AVX512VL__" : 1 00:01:26.138 Fetching value of define "__PCLMUL__" : 1 00:01:26.138 Fetching value of define "__RDRND__" : 1 00:01:26.138 Fetching value of define "__RDSEED__" : 1 00:01:26.138 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:26.138 Fetching value of define "__znver1__" : (undefined) 00:01:26.138 Fetching value of define "__znver2__" : (undefined) 00:01:26.138 Fetching value of define "__znver3__" : (undefined) 00:01:26.138 Fetching value of define "__znver4__" : (undefined) 00:01:26.138 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:26.138 Message: lib/log: Defining dependency "log" 00:01:26.138 Message: lib/kvargs: Defining dependency "kvargs" 00:01:26.138 Message: lib/telemetry: Defining dependency "telemetry" 00:01:26.138 Checking for function "getentropy" : NO 00:01:26.138 Message: lib/eal: Defining dependency "eal" 00:01:26.138 Message: lib/ring: Defining dependency "ring" 00:01:26.138 Message: lib/rcu: Defining dependency "rcu" 00:01:26.138 Message: lib/mempool: Defining dependency "mempool" 00:01:26.138 Message: lib/mbuf: Defining dependency "mbuf" 00:01:26.138 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:26.138 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:26.138 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:26.138 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:26.138 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:26.138 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:26.138 Compiler for C supports arguments -mpclmul: YES 00:01:26.138 Compiler for C supports arguments -maes: YES 00:01:26.138 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:26.138 Compiler for C supports arguments -mavx512bw: YES 00:01:26.138 Compiler for C supports arguments -mavx512dq: YES 00:01:26.138 Compiler for C supports arguments -mavx512vl: YES 00:01:26.138 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:26.138 Compiler for C supports arguments -mavx2: YES 00:01:26.138 Compiler for C supports arguments -mavx: YES 00:01:26.138 Message: lib/net: Defining dependency "net" 00:01:26.138 Message: lib/meter: Defining dependency "meter" 00:01:26.138 Message: lib/ethdev: Defining dependency "ethdev" 00:01:26.138 Message: lib/pci: Defining dependency "pci" 00:01:26.138 Message: lib/cmdline: Defining dependency "cmdline" 00:01:26.138 Message: lib/hash: Defining dependency "hash" 00:01:26.138 Message: lib/timer: Defining dependency "timer" 00:01:26.138 Message: lib/compressdev: Defining dependency "compressdev" 00:01:26.138 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:26.138 Message: lib/dmadev: Defining dependency "dmadev" 00:01:26.138 Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:26.138 Message: lib/power: Defining dependency "power" 00:01:26.138 Message: lib/reorder: Defining dependency "reorder" 00:01:26.138 Message: lib/security: Defining dependency "security" 00:01:26.138 Has header "linux/userfaultfd.h" : YES 00:01:26.138 Has header "linux/vduse.h" : YES 00:01:26.138 Message: lib/vhost: Defining dependency "vhost" 00:01:26.138 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:26.138 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:26.138 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:26.138 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:26.138 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:26.138 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:26.138 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:26.138 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:26.138 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:26.138 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:26.138 Program doxygen found: YES (/usr/bin/doxygen) 00:01:26.138 Configuring doxy-api-html.conf using configuration 00:01:26.138 Configuring doxy-api-man.conf using configuration 00:01:26.138 Program mandb found: YES (/usr/bin/mandb) 00:01:26.138 Program sphinx-build found: NO 00:01:26.138 Configuring rte_build_config.h using configuration 00:01:26.138 Message: 00:01:26.138 ================= 00:01:26.138 Applications Enabled 00:01:26.138 ================= 00:01:26.138 00:01:26.138 apps: 00:01:26.138 00:01:26.138 00:01:26.138 Message: 00:01:26.138 ================= 00:01:26.138 Libraries Enabled 00:01:26.138 ================= 00:01:26.138 00:01:26.138 libs: 00:01:26.138 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:26.138 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:26.138 cryptodev, dmadev, power, reorder, security, vhost, 00:01:26.138 00:01:26.138 Message: 00:01:26.138 =============== 00:01:26.138 Drivers Enabled 00:01:26.138 =============== 00:01:26.138 00:01:26.138 common: 00:01:26.138 00:01:26.138 bus: 00:01:26.138 pci, vdev, 00:01:26.138 mempool: 00:01:26.138 ring, 00:01:26.138 dma: 00:01:26.138 00:01:26.138 net: 00:01:26.138 00:01:26.138 crypto: 00:01:26.138 00:01:26.138 compress: 00:01:26.138 00:01:26.138 vdpa: 00:01:26.139 00:01:26.139 00:01:26.139 Message: 00:01:26.139 ================= 00:01:26.139 Content Skipped 00:01:26.139 ================= 00:01:26.139 00:01:26.139 apps: 00:01:26.139 dumpcap: explicitly disabled via build config 00:01:26.139 graph: explicitly disabled via build config 00:01:26.139 pdump: explicitly disabled via build config 00:01:26.139 proc-info: explicitly disabled via build config 00:01:26.139 test-acl: explicitly disabled via build config 00:01:26.139 test-bbdev: explicitly disabled via build config 00:01:26.139 test-cmdline: explicitly disabled via build config 00:01:26.139 test-compress-perf: explicitly disabled via build config 00:01:26.139 test-crypto-perf: explicitly disabled via build config 00:01:26.139 test-dma-perf: explicitly disabled via build config 00:01:26.139 test-eventdev: explicitly disabled via build config 00:01:26.139 test-fib: explicitly disabled via build config 00:01:26.139 test-flow-perf: explicitly disabled via build config 00:01:26.139 test-gpudev: explicitly disabled via build config 00:01:26.139 test-mldev: explicitly disabled via build 
config 00:01:26.139 test-pipeline: explicitly disabled via build config 00:01:26.139 test-pmd: explicitly disabled via build config 00:01:26.139 test-regex: explicitly disabled via build config 00:01:26.139 test-sad: explicitly disabled via build config 00:01:26.139 test-security-perf: explicitly disabled via build config 00:01:26.139 00:01:26.139 libs: 00:01:26.139 metrics: explicitly disabled via build config 00:01:26.139 acl: explicitly disabled via build config 00:01:26.139 bbdev: explicitly disabled via build config 00:01:26.139 bitratestats: explicitly disabled via build config 00:01:26.139 bpf: explicitly disabled via build config 00:01:26.139 cfgfile: explicitly disabled via build config 00:01:26.139 distributor: explicitly disabled via build config 00:01:26.139 efd: explicitly disabled via build config 00:01:26.139 eventdev: explicitly disabled via build config 00:01:26.139 dispatcher: explicitly disabled via build config 00:01:26.139 gpudev: explicitly disabled via build config 00:01:26.139 gro: explicitly disabled via build config 00:01:26.139 gso: explicitly disabled via build config 00:01:26.139 ip_frag: explicitly disabled via build config 00:01:26.139 jobstats: explicitly disabled via build config 00:01:26.139 latencystats: explicitly disabled via build config 00:01:26.139 lpm: explicitly disabled via build config 00:01:26.139 member: explicitly disabled via build config 00:01:26.139 pcapng: explicitly disabled via build config 00:01:26.139 rawdev: explicitly disabled via build config 00:01:26.139 regexdev: explicitly disabled via build config 00:01:26.139 mldev: explicitly disabled via build config 00:01:26.139 rib: explicitly disabled via build config 00:01:26.139 sched: explicitly disabled via build config 00:01:26.139 stack: explicitly disabled via build config 00:01:26.139 ipsec: explicitly disabled via build config 00:01:26.139 pdcp: explicitly disabled via build config 00:01:26.139 fib: explicitly disabled via build config 00:01:26.139 port: explicitly disabled via build config 00:01:26.139 pdump: explicitly disabled via build config 00:01:26.139 table: explicitly disabled via build config 00:01:26.139 pipeline: explicitly disabled via build config 00:01:26.139 graph: explicitly disabled via build config 00:01:26.139 node: explicitly disabled via build config 00:01:26.139 00:01:26.139 drivers: 00:01:26.139 common/cpt: not in enabled drivers build config 00:01:26.139 common/dpaax: not in enabled drivers build config 00:01:26.139 common/iavf: not in enabled drivers build config 00:01:26.139 common/idpf: not in enabled drivers build config 00:01:26.139 common/mvep: not in enabled drivers build config 00:01:26.139 common/octeontx: not in enabled drivers build config 00:01:26.139 bus/auxiliary: not in enabled drivers build config 00:01:26.139 bus/cdx: not in enabled drivers build config 00:01:26.139 bus/dpaa: not in enabled drivers build config 00:01:26.139 bus/fslmc: not in enabled drivers build config 00:01:26.139 bus/ifpga: not in enabled drivers build config 00:01:26.139 bus/platform: not in enabled drivers build config 00:01:26.139 bus/vmbus: not in enabled drivers build config 00:01:26.139 common/cnxk: not in enabled drivers build config 00:01:26.139 common/mlx5: not in enabled drivers build config 00:01:26.139 common/nfp: not in enabled drivers build config 00:01:26.139 common/qat: not in enabled drivers build config 00:01:26.139 common/sfc_efx: not in enabled drivers build config 00:01:26.139 mempool/bucket: not in enabled drivers build config 00:01:26.139 
mempool/cnxk: not in enabled drivers build config 00:01:26.139 mempool/dpaa: not in enabled drivers build config 00:01:26.139 mempool/dpaa2: not in enabled drivers build config 00:01:26.139 mempool/octeontx: not in enabled drivers build config 00:01:26.139 mempool/stack: not in enabled drivers build config 00:01:26.139 dma/cnxk: not in enabled drivers build config 00:01:26.139 dma/dpaa: not in enabled drivers build config 00:01:26.139 dma/dpaa2: not in enabled drivers build config 00:01:26.139 dma/hisilicon: not in enabled drivers build config 00:01:26.139 dma/idxd: not in enabled drivers build config 00:01:26.139 dma/ioat: not in enabled drivers build config 00:01:26.139 dma/skeleton: not in enabled drivers build config 00:01:26.139 net/af_packet: not in enabled drivers build config 00:01:26.139 net/af_xdp: not in enabled drivers build config 00:01:26.139 net/ark: not in enabled drivers build config 00:01:26.139 net/atlantic: not in enabled drivers build config 00:01:26.139 net/avp: not in enabled drivers build config 00:01:26.139 net/axgbe: not in enabled drivers build config 00:01:26.139 net/bnx2x: not in enabled drivers build config 00:01:26.139 net/bnxt: not in enabled drivers build config 00:01:26.139 net/bonding: not in enabled drivers build config 00:01:26.139 net/cnxk: not in enabled drivers build config 00:01:26.139 net/cpfl: not in enabled drivers build config 00:01:26.139 net/cxgbe: not in enabled drivers build config 00:01:26.139 net/dpaa: not in enabled drivers build config 00:01:26.139 net/dpaa2: not in enabled drivers build config 00:01:26.139 net/e1000: not in enabled drivers build config 00:01:26.139 net/ena: not in enabled drivers build config 00:01:26.139 net/enetc: not in enabled drivers build config 00:01:26.139 net/enetfec: not in enabled drivers build config 00:01:26.139 net/enic: not in enabled drivers build config 00:01:26.139 net/failsafe: not in enabled drivers build config 00:01:26.139 net/fm10k: not in enabled drivers build config 00:01:26.139 net/gve: not in enabled drivers build config 00:01:26.139 net/hinic: not in enabled drivers build config 00:01:26.139 net/hns3: not in enabled drivers build config 00:01:26.139 net/i40e: not in enabled drivers build config 00:01:26.139 net/iavf: not in enabled drivers build config 00:01:26.139 net/ice: not in enabled drivers build config 00:01:26.139 net/idpf: not in enabled drivers build config 00:01:26.139 net/igc: not in enabled drivers build config 00:01:26.139 net/ionic: not in enabled drivers build config 00:01:26.139 net/ipn3ke: not in enabled drivers build config 00:01:26.139 net/ixgbe: not in enabled drivers build config 00:01:26.139 net/mana: not in enabled drivers build config 00:01:26.139 net/memif: not in enabled drivers build config 00:01:26.139 net/mlx4: not in enabled drivers build config 00:01:26.139 net/mlx5: not in enabled drivers build config 00:01:26.139 net/mvneta: not in enabled drivers build config 00:01:26.139 net/mvpp2: not in enabled drivers build config 00:01:26.139 net/netvsc: not in enabled drivers build config 00:01:26.139 net/nfb: not in enabled drivers build config 00:01:26.139 net/nfp: not in enabled drivers build config 00:01:26.139 net/ngbe: not in enabled drivers build config 00:01:26.139 net/null: not in enabled drivers build config 00:01:26.139 net/octeontx: not in enabled drivers build config 00:01:26.139 net/octeon_ep: not in enabled drivers build config 00:01:26.139 net/pcap: not in enabled drivers build config 00:01:26.139 net/pfe: not in enabled drivers build config 
00:01:26.139 net/qede: not in enabled drivers build config 00:01:26.139 net/ring: not in enabled drivers build config 00:01:26.139 net/sfc: not in enabled drivers build config 00:01:26.139 net/softnic: not in enabled drivers build config 00:01:26.139 net/tap: not in enabled drivers build config 00:01:26.139 net/thunderx: not in enabled drivers build config 00:01:26.139 net/txgbe: not in enabled drivers build config 00:01:26.139 net/vdev_netvsc: not in enabled drivers build config 00:01:26.139 net/vhost: not in enabled drivers build config 00:01:26.139 net/virtio: not in enabled drivers build config 00:01:26.139 net/vmxnet3: not in enabled drivers build config 00:01:26.139 raw/*: missing internal dependency, "rawdev" 00:01:26.139 crypto/armv8: not in enabled drivers build config 00:01:26.139 crypto/bcmfs: not in enabled drivers build config 00:01:26.139 crypto/caam_jr: not in enabled drivers build config 00:01:26.139 crypto/ccp: not in enabled drivers build config 00:01:26.139 crypto/cnxk: not in enabled drivers build config 00:01:26.139 crypto/dpaa_sec: not in enabled drivers build config 00:01:26.139 crypto/dpaa2_sec: not in enabled drivers build config 00:01:26.139 crypto/ipsec_mb: not in enabled drivers build config 00:01:26.139 crypto/mlx5: not in enabled drivers build config 00:01:26.139 crypto/mvsam: not in enabled drivers build config 00:01:26.139 crypto/nitrox: not in enabled drivers build config 00:01:26.139 crypto/null: not in enabled drivers build config 00:01:26.139 crypto/octeontx: not in enabled drivers build config 00:01:26.139 crypto/openssl: not in enabled drivers build config 00:01:26.139 crypto/scheduler: not in enabled drivers build config 00:01:26.139 crypto/uadk: not in enabled drivers build config 00:01:26.139 crypto/virtio: not in enabled drivers build config 00:01:26.139 compress/isal: not in enabled drivers build config 00:01:26.139 compress/mlx5: not in enabled drivers build config 00:01:26.139 compress/octeontx: not in enabled drivers build config 00:01:26.139 compress/zlib: not in enabled drivers build config 00:01:26.139 regex/*: missing internal dependency, "regexdev" 00:01:26.139 ml/*: missing internal dependency, "mldev" 00:01:26.139 vdpa/ifc: not in enabled drivers build config 00:01:26.139 vdpa/mlx5: not in enabled drivers build config 00:01:26.139 vdpa/nfp: not in enabled drivers build config 00:01:26.139 vdpa/sfc: not in enabled drivers build config 00:01:26.139 event/*: missing internal dependency, "eventdev" 00:01:26.139 baseband/*: missing internal dependency, "bbdev" 00:01:26.139 gpu/*: missing internal dependency, "gpudev" 00:01:26.139 00:01:26.139 00:01:26.398 Build targets in project: 85 00:01:26.398 00:01:26.398 DPDK 23.11.0 00:01:26.398 00:01:26.398 User defined options 00:01:26.398 buildtype : debug 00:01:26.398 default_library : shared 00:01:26.398 libdir : lib 00:01:26.398 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:26.398 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:26.398 c_link_args : 00:01:26.398 cpu_instruction_set: native 00:01:26.398 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:26.398 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:26.398 enable_docs : false 00:01:26.398 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:26.398 enable_kmods : false 00:01:26.398 tests : false 00:01:26.398 00:01:26.398 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:26.659 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:26.923 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:26.923 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:26.923 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:26.923 [4/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:26.923 [5/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:26.923 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:26.923 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:26.923 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:26.923 [9/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:26.923 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:26.923 [11/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:26.923 [12/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:26.923 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:26.923 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:26.923 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:26.923 [16/265] Linking static target lib/librte_kvargs.a 00:01:26.923 [17/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:26.923 [18/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:26.923 [19/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:26.923 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:26.923 [21/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:26.923 [22/265] Linking static target lib/librte_log.a 00:01:26.923 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:26.923 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:26.923 [25/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:26.923 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:26.923 [27/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:27.185 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:27.185 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:27.185 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:27.185 [31/265] Linking static target lib/librte_pci.a 00:01:27.185 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:27.185 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:27.185 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:27.185 [35/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:27.185 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:27.185 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:27.185 [38/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:27.185 [39/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:27.185 [40/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:27.185 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:27.444 [42/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:27.444 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:27.444 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:27.444 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:27.444 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:27.444 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:27.444 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:27.444 [49/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.444 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:27.444 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:27.444 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:27.444 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:27.444 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:27.444 [55/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:27.444 [56/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.444 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:27.444 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:27.444 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:27.444 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:27.444 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:27.444 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:27.444 [63/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:27.444 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:27.444 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:27.444 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:27.444 [67/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:27.444 [68/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:27.444 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:27.444 [70/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:27.444 [71/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:27.444 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:27.444 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:27.444 [74/265] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:27.444 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:27.444 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:27.444 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:27.444 [78/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:27.444 [79/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:27.444 [80/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:27.444 [81/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:27.444 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:27.444 [83/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:27.444 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:27.444 [85/265] Linking static target lib/librte_meter.a 00:01:27.444 [86/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:27.444 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:27.444 [88/265] Linking static target lib/librte_ring.a 00:01:27.444 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:27.444 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:27.444 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:27.444 [92/265] Linking static target lib/librte_telemetry.a 00:01:27.444 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:27.444 [94/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:27.444 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:27.444 [96/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:27.444 [97/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:27.444 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:27.444 [99/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:27.444 [100/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:27.444 [101/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:27.444 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:27.444 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:27.444 [104/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:27.444 [105/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:27.444 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:27.444 [107/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:27.444 [108/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:27.444 [109/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:27.444 [110/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:27.444 [111/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:27.444 [112/265] Linking static target lib/librte_cmdline.a 00:01:27.444 [113/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:27.703 [114/265] Linking static target lib/librte_net.a 00:01:27.703 [115/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:27.703 [116/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:27.703 [117/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:27.703 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:27.703 [119/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:27.703 [120/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:27.703 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:27.703 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:27.703 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:27.703 [124/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:27.703 [125/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:27.703 [126/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:27.703 [127/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:27.703 [128/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:27.703 [129/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:27.703 [130/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:27.703 [131/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:27.703 [132/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:27.703 [133/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:27.703 [134/265] Linking static target lib/librte_timer.a 00:01:27.703 [135/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:27.703 [136/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:27.703 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:27.703 [138/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:27.703 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:27.703 [140/265] Linking static target lib/librte_eal.a 00:01:27.703 [141/265] Linking static target lib/librte_dmadev.a 00:01:27.703 [142/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:27.703 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:27.703 [144/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:27.703 [145/265] Linking static target lib/librte_mempool.a 00:01:27.703 [146/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:27.703 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:27.703 [148/265] Linking static target lib/librte_rcu.a 00:01:27.703 [149/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:27.703 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:27.703 [151/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:27.703 [152/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:27.703 [153/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:27.703 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:27.703 [155/265] Linking static target lib/librte_compressdev.a 00:01:27.703 [156/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 
00:01:27.703 [157/265] Linking static target lib/librte_reorder.a 00:01:27.703 [158/265] Linking static target lib/librte_power.a 00:01:27.703 [159/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:27.703 [160/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:27.703 [161/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.703 [162/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:27.703 [163/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:27.703 [164/265] Linking static target lib/librte_mbuf.a 00:01:27.703 [165/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.703 [166/265] Linking static target lib/librte_security.a 00:01:27.703 [167/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:27.703 [168/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:27.703 [169/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:27.703 [170/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:27.703 [171/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:27.703 [172/265] Linking target lib/librte_log.so.24.0 00:01:27.703 [173/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:27.703 [174/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:27.961 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:27.961 [176/265] Linking static target lib/librte_hash.a 00:01:27.961 [177/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.961 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:27.961 [179/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:27.961 [180/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:27.961 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:27.961 [182/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:27.961 [183/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.961 [184/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:27.961 [185/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:27.961 [186/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:27.961 [187/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:27.962 [188/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:27.962 [189/265] Linking static target lib/librte_cryptodev.a 00:01:27.962 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:27.962 [191/265] Linking target lib/librte_kvargs.so.24.0 00:01:27.962 [192/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:27.962 [193/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:27.962 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:27.962 [195/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:27.962 [196/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.962 [197/265] Linking static target 
drivers/librte_bus_vdev.a 00:01:27.962 [198/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.962 [199/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:27.962 [200/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.220 [201/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:28.220 [202/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:28.220 [203/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.220 [204/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:28.220 [205/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:28.220 [206/265] Linking static target drivers/librte_bus_pci.a 00:01:28.220 [207/265] Linking target lib/librte_telemetry.so.24.0 00:01:28.220 [208/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:28.220 [209/265] Linking static target drivers/librte_mempool_ring.a 00:01:28.220 [210/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:28.220 [211/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.220 [212/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:28.479 [213/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:28.479 [214/265] Linking static target lib/librte_ethdev.a 00:01:28.479 [215/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.479 [216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.479 [217/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.479 [218/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.479 [219/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:28.479 [220/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.738 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.738 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.738 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.996 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.562 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:29.562 [226/265] Linking static target lib/librte_vhost.a 00:01:30.128 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.031 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.596 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.499 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.499 [231/265] Linking target lib/librte_eal.so.24.0 00:01:40.499 [232/265] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:40.499 [233/265] Linking target lib/librte_dmadev.so.24.0 00:01:40.499 [234/265] Linking target lib/librte_meter.so.24.0 00:01:40.499 [235/265] Linking target lib/librte_ring.so.24.0 00:01:40.499 [236/265] Linking target lib/librte_pci.so.24.0 00:01:40.499 [237/265] Linking target lib/librte_timer.so.24.0 00:01:40.499 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:40.757 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:40.757 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:40.757 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:40.757 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:40.757 [243/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:40.757 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:40.757 [245/265] Linking target lib/librte_rcu.so.24.0 00:01:40.757 [246/265] Linking target lib/librte_mempool.so.24.0 00:01:40.757 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:40.757 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:40.757 [249/265] Linking target lib/librte_mbuf.so.24.0 00:01:40.757 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:41.014 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:41.014 [252/265] Linking target lib/librte_reorder.so.24.0 00:01:41.014 [253/265] Linking target lib/librte_compressdev.so.24.0 00:01:41.014 [254/265] Linking target lib/librte_net.so.24.0 00:01:41.014 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:41.271 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:41.271 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:41.271 [258/265] Linking target lib/librte_hash.so.24.0 00:01:41.271 [259/265] Linking target lib/librte_security.so.24.0 00:01:41.271 [260/265] Linking target lib/librte_cmdline.so.24.0 00:01:41.271 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:41.271 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:41.271 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:41.529 [264/265] Linking target lib/librte_power.so.24.0 00:01:41.529 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:41.529 INFO: autodetecting backend as ninja 00:01:41.529 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:42.463 CC lib/log/log.o 00:01:42.463 CC lib/log/log_flags.o 00:01:42.463 CC lib/log/log_deprecated.o 00:01:42.463 CC lib/ut_mock/mock.o 00:01:42.463 CC lib/ut/ut.o 00:01:42.721 LIB libspdk_ut_mock.a 00:01:42.721 LIB libspdk_log.a 00:01:42.721 LIB libspdk_ut.a 00:01:42.721 SO libspdk_ut_mock.so.6.0 00:01:42.721 SO libspdk_log.so.7.0 00:01:42.721 SO libspdk_ut.so.2.0 00:01:42.721 SYMLINK libspdk_ut_mock.so 00:01:42.721 SYMLINK libspdk_log.so 00:01:42.721 SYMLINK libspdk_ut.so 00:01:43.289 CXX lib/trace_parser/trace.o 00:01:43.289 CC lib/ioat/ioat.o 00:01:43.289 CC lib/dma/dma.o 00:01:43.289 CC lib/util/base64.o 00:01:43.289 CC lib/util/bit_array.o 00:01:43.289 CC lib/util/cpuset.o 00:01:43.289 CC 
lib/util/crc32.o 00:01:43.289 CC lib/util/crc16.o 00:01:43.289 CC lib/util/crc32c.o 00:01:43.289 CC lib/util/crc32_ieee.o 00:01:43.289 CC lib/util/crc64.o 00:01:43.289 CC lib/util/dif.o 00:01:43.289 CC lib/util/fd.o 00:01:43.289 CC lib/util/hexlify.o 00:01:43.289 CC lib/util/file.o 00:01:43.289 CC lib/util/iov.o 00:01:43.289 CC lib/util/math.o 00:01:43.289 CC lib/util/pipe.o 00:01:43.289 CC lib/util/string.o 00:01:43.289 CC lib/util/strerror_tls.o 00:01:43.289 CC lib/util/uuid.o 00:01:43.289 CC lib/util/fd_group.o 00:01:43.289 CC lib/util/xor.o 00:01:43.289 CC lib/util/zipf.o 00:01:43.289 CC lib/vfio_user/host/vfio_user.o 00:01:43.289 CC lib/vfio_user/host/vfio_user_pci.o 00:01:43.289 LIB libspdk_dma.a 00:01:43.289 SO libspdk_dma.so.4.0 00:01:43.548 LIB libspdk_ioat.a 00:01:43.548 SO libspdk_ioat.so.7.0 00:01:43.548 SYMLINK libspdk_dma.so 00:01:43.548 SYMLINK libspdk_ioat.so 00:01:43.548 LIB libspdk_vfio_user.a 00:01:43.548 SO libspdk_vfio_user.so.5.0 00:01:43.548 LIB libspdk_util.a 00:01:43.548 SYMLINK libspdk_vfio_user.so 00:01:43.807 SO libspdk_util.so.9.0 00:01:43.807 LIB libspdk_trace_parser.a 00:01:43.807 SYMLINK libspdk_util.so 00:01:43.807 SO libspdk_trace_parser.so.5.0 00:01:43.807 SYMLINK libspdk_trace_parser.so 00:01:44.066 CC lib/idxd/idxd_user.o 00:01:44.066 CC lib/idxd/idxd.o 00:01:44.066 CC lib/vmd/vmd.o 00:01:44.066 CC lib/vmd/led.o 00:01:44.066 CC lib/conf/conf.o 00:01:44.066 CC lib/json/json_parse.o 00:01:44.066 CC lib/json/json_write.o 00:01:44.066 CC lib/rdma/common.o 00:01:44.066 CC lib/json/json_util.o 00:01:44.066 CC lib/env_dpdk/env.o 00:01:44.066 CC lib/rdma/rdma_verbs.o 00:01:44.066 CC lib/env_dpdk/memory.o 00:01:44.066 CC lib/env_dpdk/pci.o 00:01:44.066 CC lib/env_dpdk/threads.o 00:01:44.066 CC lib/env_dpdk/init.o 00:01:44.066 CC lib/env_dpdk/pci_ioat.o 00:01:44.066 CC lib/env_dpdk/pci_virtio.o 00:01:44.066 CC lib/env_dpdk/pci_vmd.o 00:01:44.066 CC lib/env_dpdk/pci_idxd.o 00:01:44.066 CC lib/env_dpdk/pci_event.o 00:01:44.066 CC lib/env_dpdk/sigbus_handler.o 00:01:44.066 CC lib/env_dpdk/pci_dpdk.o 00:01:44.324 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:44.324 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:44.324 LIB libspdk_conf.a 00:01:44.324 SO libspdk_conf.so.6.0 00:01:44.324 LIB libspdk_rdma.a 00:01:44.324 LIB libspdk_json.a 00:01:44.582 SO libspdk_json.so.6.0 00:01:44.582 SO libspdk_rdma.so.6.0 00:01:44.582 SYMLINK libspdk_conf.so 00:01:44.582 SYMLINK libspdk_rdma.so 00:01:44.582 SYMLINK libspdk_json.so 00:01:44.582 LIB libspdk_idxd.a 00:01:44.582 SO libspdk_idxd.so.12.0 00:01:44.582 LIB libspdk_vmd.a 00:01:44.582 SYMLINK libspdk_idxd.so 00:01:44.582 SO libspdk_vmd.so.6.0 00:01:44.839 SYMLINK libspdk_vmd.so 00:01:44.839 CC lib/jsonrpc/jsonrpc_server.o 00:01:44.839 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:44.839 CC lib/jsonrpc/jsonrpc_client.o 00:01:44.839 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:45.096 LIB libspdk_jsonrpc.a 00:01:45.096 LIB libspdk_env_dpdk.a 00:01:45.096 SO libspdk_jsonrpc.so.6.0 00:01:45.354 SO libspdk_env_dpdk.so.14.0 00:01:45.354 SYMLINK libspdk_jsonrpc.so 00:01:45.354 SYMLINK libspdk_env_dpdk.so 00:01:45.613 CC lib/rpc/rpc.o 00:01:45.613 LIB libspdk_rpc.a 00:01:45.872 SO libspdk_rpc.so.6.0 00:01:45.872 SYMLINK libspdk_rpc.so 00:01:46.131 CC lib/trace/trace.o 00:01:46.131 CC lib/trace/trace_flags.o 00:01:46.131 CC lib/trace/trace_rpc.o 00:01:46.131 CC lib/keyring/keyring.o 00:01:46.131 CC lib/keyring/keyring_rpc.o 00:01:46.131 CC lib/notify/notify.o 00:01:46.131 CC lib/notify/notify_rpc.o 00:01:46.390 LIB libspdk_notify.a 00:01:46.390 LIB 
libspdk_trace.a 00:01:46.390 LIB libspdk_keyring.a 00:01:46.390 SO libspdk_notify.so.6.0 00:01:46.390 SO libspdk_trace.so.10.0 00:01:46.390 SO libspdk_keyring.so.1.0 00:01:46.390 SYMLINK libspdk_notify.so 00:01:46.390 SYMLINK libspdk_trace.so 00:01:46.390 SYMLINK libspdk_keyring.so 00:01:46.956 CC lib/thread/thread.o 00:01:46.956 CC lib/thread/iobuf.o 00:01:46.956 CC lib/sock/sock.o 00:01:46.956 CC lib/sock/sock_rpc.o 00:01:47.214 LIB libspdk_sock.a 00:01:47.214 SO libspdk_sock.so.9.0 00:01:47.214 SYMLINK libspdk_sock.so 00:01:47.472 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:47.472 CC lib/nvme/nvme_ctrlr.o 00:01:47.472 CC lib/nvme/nvme_fabric.o 00:01:47.472 CC lib/nvme/nvme_ns_cmd.o 00:01:47.472 CC lib/nvme/nvme_ns.o 00:01:47.472 CC lib/nvme/nvme_pcie_common.o 00:01:47.472 CC lib/nvme/nvme_pcie.o 00:01:47.730 CC lib/nvme/nvme_qpair.o 00:01:47.730 CC lib/nvme/nvme.o 00:01:47.730 CC lib/nvme/nvme_quirks.o 00:01:47.730 CC lib/nvme/nvme_transport.o 00:01:47.730 CC lib/nvme/nvme_discovery.o 00:01:47.731 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:47.731 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:47.731 CC lib/nvme/nvme_tcp.o 00:01:47.731 CC lib/nvme/nvme_opal.o 00:01:47.731 CC lib/nvme/nvme_io_msg.o 00:01:47.731 CC lib/nvme/nvme_poll_group.o 00:01:47.731 CC lib/nvme/nvme_zns.o 00:01:47.731 CC lib/nvme/nvme_stubs.o 00:01:47.731 CC lib/nvme/nvme_auth.o 00:01:47.731 CC lib/nvme/nvme_cuse.o 00:01:47.731 CC lib/nvme/nvme_vfio_user.o 00:01:47.731 CC lib/nvme/nvme_rdma.o 00:01:47.988 LIB libspdk_thread.a 00:01:47.988 SO libspdk_thread.so.10.0 00:01:47.988 SYMLINK libspdk_thread.so 00:01:48.246 CC lib/accel/accel.o 00:01:48.246 CC lib/vfu_tgt/tgt_rpc.o 00:01:48.246 CC lib/accel/accel_rpc.o 00:01:48.246 CC lib/vfu_tgt/tgt_endpoint.o 00:01:48.246 CC lib/accel/accel_sw.o 00:01:48.246 CC lib/blob/request.o 00:01:48.246 CC lib/blob/blobstore.o 00:01:48.246 CC lib/blob/zeroes.o 00:01:48.246 CC lib/blob/blob_bs_dev.o 00:01:48.246 CC lib/init/json_config.o 00:01:48.246 CC lib/init/subsystem.o 00:01:48.246 CC lib/init/rpc.o 00:01:48.246 CC lib/init/subsystem_rpc.o 00:01:48.246 CC lib/virtio/virtio.o 00:01:48.246 CC lib/virtio/virtio_vhost_user.o 00:01:48.246 CC lib/virtio/virtio_vfio_user.o 00:01:48.246 CC lib/virtio/virtio_pci.o 00:01:48.504 LIB libspdk_init.a 00:01:48.504 LIB libspdk_vfu_tgt.a 00:01:48.504 LIB libspdk_virtio.a 00:01:48.504 SO libspdk_vfu_tgt.so.3.0 00:01:48.504 SO libspdk_init.so.5.0 00:01:48.762 SO libspdk_virtio.so.7.0 00:01:48.762 SYMLINK libspdk_vfu_tgt.so 00:01:48.762 SYMLINK libspdk_init.so 00:01:48.762 SYMLINK libspdk_virtio.so 00:01:49.020 CC lib/event/reactor.o 00:01:49.020 CC lib/event/app.o 00:01:49.020 CC lib/event/log_rpc.o 00:01:49.020 CC lib/event/app_rpc.o 00:01:49.020 LIB libspdk_accel.a 00:01:49.020 CC lib/event/scheduler_static.o 00:01:49.020 SO libspdk_accel.so.15.0 00:01:49.020 SYMLINK libspdk_accel.so 00:01:49.020 LIB libspdk_nvme.a 00:01:49.279 SO libspdk_nvme.so.13.0 00:01:49.279 LIB libspdk_event.a 00:01:49.279 SO libspdk_event.so.13.0 00:01:49.537 SYMLINK libspdk_event.so 00:01:49.537 CC lib/bdev/bdev.o 00:01:49.537 CC lib/bdev/bdev_rpc.o 00:01:49.537 CC lib/bdev/bdev_zone.o 00:01:49.537 CC lib/bdev/part.o 00:01:49.537 CC lib/bdev/scsi_nvme.o 00:01:49.537 SYMLINK libspdk_nvme.so 00:01:50.471 LIB libspdk_blob.a 00:01:50.471 SO libspdk_blob.so.11.0 00:01:50.471 SYMLINK libspdk_blob.so 00:01:50.730 CC lib/blobfs/blobfs.o 00:01:50.730 CC lib/blobfs/tree.o 00:01:50.730 CC lib/lvol/lvol.o 00:01:51.296 LIB libspdk_bdev.a 00:01:51.296 SO libspdk_bdev.so.15.0 00:01:51.296 LIB 
libspdk_blobfs.a 00:01:51.296 SO libspdk_blobfs.so.10.0 00:01:51.296 SYMLINK libspdk_bdev.so 00:01:51.554 LIB libspdk_lvol.a 00:01:51.554 SO libspdk_lvol.so.10.0 00:01:51.554 SYMLINK libspdk_blobfs.so 00:01:51.554 SYMLINK libspdk_lvol.so 00:01:51.814 CC lib/nbd/nbd.o 00:01:51.814 CC lib/nbd/nbd_rpc.o 00:01:51.814 CC lib/scsi/dev.o 00:01:51.814 CC lib/scsi/lun.o 00:01:51.814 CC lib/scsi/port.o 00:01:51.814 CC lib/scsi/scsi.o 00:01:51.814 CC lib/nvmf/ctrlr.o 00:01:51.814 CC lib/scsi/scsi_bdev.o 00:01:51.814 CC lib/nvmf/ctrlr_discovery.o 00:01:51.814 CC lib/scsi/scsi_pr.o 00:01:51.814 CC lib/nvmf/ctrlr_bdev.o 00:01:51.814 CC lib/nvmf/subsystem.o 00:01:51.814 CC lib/scsi/scsi_rpc.o 00:01:51.814 CC lib/nvmf/nvmf.o 00:01:51.814 CC lib/scsi/task.o 00:01:51.814 CC lib/nvmf/nvmf_rpc.o 00:01:51.814 CC lib/nvmf/transport.o 00:01:51.814 CC lib/nvmf/tcp.o 00:01:51.814 CC lib/nvmf/stubs.o 00:01:51.814 CC lib/nvmf/mdns_server.o 00:01:51.814 CC lib/nvmf/vfio_user.o 00:01:51.814 CC lib/nvmf/auth.o 00:01:51.814 CC lib/nvmf/rdma.o 00:01:51.814 CC lib/ublk/ublk_rpc.o 00:01:51.814 CC lib/ublk/ublk.o 00:01:51.814 CC lib/ftl/ftl_core.o 00:01:51.814 CC lib/ftl/ftl_init.o 00:01:51.814 CC lib/ftl/ftl_layout.o 00:01:51.814 CC lib/ftl/ftl_debug.o 00:01:51.814 CC lib/ftl/ftl_io.o 00:01:51.814 CC lib/ftl/ftl_sb.o 00:01:51.814 CC lib/ftl/ftl_l2p_flat.o 00:01:51.814 CC lib/ftl/ftl_l2p.o 00:01:51.814 CC lib/ftl/ftl_nv_cache.o 00:01:51.814 CC lib/ftl/ftl_band.o 00:01:51.814 CC lib/ftl/ftl_band_ops.o 00:01:51.814 CC lib/ftl/ftl_writer.o 00:01:51.814 CC lib/ftl/ftl_rq.o 00:01:51.814 CC lib/ftl/ftl_reloc.o 00:01:51.814 CC lib/ftl/ftl_p2l.o 00:01:51.814 CC lib/ftl/ftl_l2p_cache.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:51.814 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:51.814 CC lib/ftl/utils/ftl_conf.o 00:01:51.814 CC lib/ftl/utils/ftl_mempool.o 00:01:51.814 CC lib/ftl/utils/ftl_md.o 00:01:51.814 CC lib/ftl/utils/ftl_property.o 00:01:51.814 CC lib/ftl/utils/ftl_bitmap.o 00:01:51.814 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:51.814 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:51.814 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:51.814 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:51.814 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:51.814 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:51.814 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:51.814 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:51.814 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:51.814 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:51.814 CC lib/ftl/base/ftl_base_bdev.o 00:01:51.814 CC lib/ftl/base/ftl_base_dev.o 00:01:51.814 CC lib/ftl/ftl_trace.o 00:01:52.381 LIB libspdk_nbd.a 00:01:52.381 SO libspdk_nbd.so.7.0 00:01:52.381 LIB libspdk_scsi.a 00:01:52.381 SYMLINK libspdk_nbd.so 00:01:52.381 SO libspdk_scsi.so.9.0 00:01:52.381 SYMLINK libspdk_scsi.so 00:01:52.381 LIB libspdk_ublk.a 00:01:52.639 SO libspdk_ublk.so.3.0 00:01:52.639 SYMLINK libspdk_ublk.so 00:01:52.639 LIB libspdk_ftl.a 00:01:52.899 SO libspdk_ftl.so.9.0 00:01:52.899 CC lib/vhost/vhost.o 00:01:52.899 CC lib/vhost/vhost_rpc.o 00:01:52.899 
CC lib/vhost/vhost_scsi.o 00:01:52.899 CC lib/vhost/vhost_blk.o 00:01:52.899 CC lib/vhost/rte_vhost_user.o 00:01:52.899 CC lib/iscsi/init_grp.o 00:01:52.899 CC lib/iscsi/conn.o 00:01:52.899 CC lib/iscsi/param.o 00:01:52.899 CC lib/iscsi/iscsi.o 00:01:52.899 CC lib/iscsi/md5.o 00:01:52.899 CC lib/iscsi/tgt_node.o 00:01:52.899 CC lib/iscsi/portal_grp.o 00:01:52.899 CC lib/iscsi/iscsi_subsystem.o 00:01:52.899 CC lib/iscsi/iscsi_rpc.o 00:01:52.899 CC lib/iscsi/task.o 00:01:53.163 SYMLINK libspdk_ftl.so 00:01:53.421 LIB libspdk_nvmf.a 00:01:53.421 SO libspdk_nvmf.so.18.0 00:01:53.679 LIB libspdk_vhost.a 00:01:53.679 SO libspdk_vhost.so.8.0 00:01:53.679 SYMLINK libspdk_nvmf.so 00:01:53.679 SYMLINK libspdk_vhost.so 00:01:53.679 LIB libspdk_iscsi.a 00:01:53.937 SO libspdk_iscsi.so.8.0 00:01:53.937 SYMLINK libspdk_iscsi.so 00:01:54.503 CC module/env_dpdk/env_dpdk_rpc.o 00:01:54.503 CC module/vfu_device/vfu_virtio.o 00:01:54.503 CC module/vfu_device/vfu_virtio_blk.o 00:01:54.503 CC module/vfu_device/vfu_virtio_scsi.o 00:01:54.503 CC module/vfu_device/vfu_virtio_rpc.o 00:01:54.761 LIB libspdk_env_dpdk_rpc.a 00:01:54.761 CC module/blob/bdev/blob_bdev.o 00:01:54.761 CC module/accel/ioat/accel_ioat.o 00:01:54.761 CC module/accel/ioat/accel_ioat_rpc.o 00:01:54.761 CC module/accel/error/accel_error.o 00:01:54.761 CC module/accel/iaa/accel_iaa.o 00:01:54.761 CC module/accel/error/accel_error_rpc.o 00:01:54.761 CC module/accel/iaa/accel_iaa_rpc.o 00:01:54.761 CC module/scheduler/gscheduler/gscheduler.o 00:01:54.761 CC module/accel/dsa/accel_dsa.o 00:01:54.761 CC module/accel/dsa/accel_dsa_rpc.o 00:01:54.761 CC module/sock/posix/posix.o 00:01:54.761 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:54.761 CC module/keyring/file/keyring.o 00:01:54.761 CC module/keyring/file/keyring_rpc.o 00:01:54.761 SO libspdk_env_dpdk_rpc.so.6.0 00:01:54.761 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:54.761 SYMLINK libspdk_env_dpdk_rpc.so 00:01:54.761 LIB libspdk_accel_ioat.a 00:01:54.761 LIB libspdk_scheduler_gscheduler.a 00:01:54.761 LIB libspdk_accel_error.a 00:01:55.019 LIB libspdk_keyring_file.a 00:01:55.019 SO libspdk_accel_ioat.so.6.0 00:01:55.019 LIB libspdk_scheduler_dpdk_governor.a 00:01:55.019 LIB libspdk_accel_iaa.a 00:01:55.019 SO libspdk_scheduler_gscheduler.so.4.0 00:01:55.019 SO libspdk_accel_error.so.2.0 00:01:55.019 LIB libspdk_blob_bdev.a 00:01:55.019 LIB libspdk_scheduler_dynamic.a 00:01:55.019 LIB libspdk_accel_dsa.a 00:01:55.019 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:55.019 SO libspdk_keyring_file.so.1.0 00:01:55.019 SO libspdk_accel_iaa.so.3.0 00:01:55.019 SYMLINK libspdk_accel_ioat.so 00:01:55.019 SO libspdk_blob_bdev.so.11.0 00:01:55.019 SO libspdk_scheduler_dynamic.so.4.0 00:01:55.019 SYMLINK libspdk_scheduler_gscheduler.so 00:01:55.019 SO libspdk_accel_dsa.so.5.0 00:01:55.019 SYMLINK libspdk_accel_error.so 00:01:55.019 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:55.019 SYMLINK libspdk_keyring_file.so 00:01:55.019 SYMLINK libspdk_accel_iaa.so 00:01:55.019 SYMLINK libspdk_scheduler_dynamic.so 00:01:55.019 SYMLINK libspdk_blob_bdev.so 00:01:55.019 SYMLINK libspdk_accel_dsa.so 00:01:55.019 LIB libspdk_vfu_device.a 00:01:55.019 SO libspdk_vfu_device.so.3.0 00:01:55.279 SYMLINK libspdk_vfu_device.so 00:01:55.279 LIB libspdk_sock_posix.a 00:01:55.279 SO libspdk_sock_posix.so.6.0 00:01:55.560 SYMLINK libspdk_sock_posix.so 00:01:55.560 CC module/bdev/raid/bdev_raid_sb.o 00:01:55.560 CC module/bdev/raid/bdev_raid.o 00:01:55.560 CC module/bdev/raid/bdev_raid_rpc.o 00:01:55.560 
CC module/bdev/raid/raid0.o 00:01:55.560 CC module/bdev/raid/raid1.o 00:01:55.560 CC module/bdev/raid/concat.o 00:01:55.560 CC module/bdev/passthru/vbdev_passthru.o 00:01:55.560 CC module/bdev/malloc/bdev_malloc.o 00:01:55.560 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:55.560 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:55.560 CC module/bdev/split/vbdev_split_rpc.o 00:01:55.560 CC module/bdev/error/vbdev_error.o 00:01:55.560 CC module/bdev/split/vbdev_split.o 00:01:55.560 CC module/bdev/error/vbdev_error_rpc.o 00:01:55.560 CC module/bdev/delay/vbdev_delay.o 00:01:55.560 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:55.560 CC module/bdev/lvol/vbdev_lvol.o 00:01:55.560 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:55.560 CC module/bdev/aio/bdev_aio.o 00:01:55.560 CC module/bdev/nvme/bdev_nvme.o 00:01:55.560 CC module/bdev/nvme/nvme_rpc.o 00:01:55.560 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:55.560 CC module/bdev/aio/bdev_aio_rpc.o 00:01:55.560 CC module/bdev/ftl/bdev_ftl.o 00:01:55.560 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:55.560 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:55.560 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:55.560 CC module/bdev/nvme/vbdev_opal.o 00:01:55.560 CC module/bdev/null/bdev_null.o 00:01:55.560 CC module/bdev/nvme/bdev_mdns_client.o 00:01:55.560 CC module/bdev/null/bdev_null_rpc.o 00:01:55.560 CC module/bdev/gpt/gpt.o 00:01:55.560 CC module/bdev/gpt/vbdev_gpt.o 00:01:55.560 CC module/blobfs/bdev/blobfs_bdev.o 00:01:55.560 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:55.560 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:55.560 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:55.561 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:55.561 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:55.561 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:55.561 CC module/bdev/iscsi/bdev_iscsi.o 00:01:55.561 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:55.826 LIB libspdk_blobfs_bdev.a 00:01:55.826 LIB libspdk_bdev_split.a 00:01:55.826 LIB libspdk_bdev_null.a 00:01:55.826 LIB libspdk_bdev_error.a 00:01:55.826 LIB libspdk_bdev_gpt.a 00:01:55.826 LIB libspdk_bdev_passthru.a 00:01:55.826 SO libspdk_blobfs_bdev.so.6.0 00:01:55.826 LIB libspdk_bdev_ftl.a 00:01:55.826 SO libspdk_bdev_split.so.6.0 00:01:55.826 SO libspdk_bdev_null.so.6.0 00:01:55.826 SO libspdk_bdev_error.so.6.0 00:01:55.826 LIB libspdk_bdev_aio.a 00:01:55.826 LIB libspdk_bdev_malloc.a 00:01:55.826 SO libspdk_bdev_gpt.so.6.0 00:01:55.826 LIB libspdk_bdev_zone_block.a 00:01:55.826 SO libspdk_bdev_passthru.so.6.0 00:01:55.826 SO libspdk_bdev_ftl.so.6.0 00:01:55.826 LIB libspdk_bdev_delay.a 00:01:55.826 SO libspdk_bdev_aio.so.6.0 00:01:55.826 SYMLINK libspdk_blobfs_bdev.so 00:01:55.826 SYMLINK libspdk_bdev_split.so 00:01:55.826 SO libspdk_bdev_malloc.so.6.0 00:01:55.826 SYMLINK libspdk_bdev_error.so 00:01:55.826 SYMLINK libspdk_bdev_null.so 00:01:55.826 SO libspdk_bdev_zone_block.so.6.0 00:01:55.826 SO libspdk_bdev_delay.so.6.0 00:01:55.826 LIB libspdk_bdev_iscsi.a 00:01:56.084 SYMLINK libspdk_bdev_passthru.so 00:01:56.084 SYMLINK libspdk_bdev_gpt.so 00:01:56.084 SYMLINK libspdk_bdev_ftl.so 00:01:56.084 SO libspdk_bdev_iscsi.so.6.0 00:01:56.084 SYMLINK libspdk_bdev_aio.so 00:01:56.084 SYMLINK libspdk_bdev_malloc.so 00:01:56.084 SYMLINK libspdk_bdev_zone_block.so 00:01:56.084 LIB libspdk_bdev_lvol.a 00:01:56.084 SYMLINK libspdk_bdev_delay.so 00:01:56.084 SYMLINK libspdk_bdev_iscsi.so 00:01:56.084 LIB libspdk_bdev_virtio.a 00:01:56.084 SO libspdk_bdev_lvol.so.6.0 00:01:56.084 SO libspdk_bdev_virtio.so.6.0 00:01:56.084 
SYMLINK libspdk_bdev_lvol.so 00:01:56.084 SYMLINK libspdk_bdev_virtio.so 00:01:56.342 LIB libspdk_bdev_raid.a 00:01:56.342 SO libspdk_bdev_raid.so.6.0 00:01:56.342 SYMLINK libspdk_bdev_raid.so 00:01:57.277 LIB libspdk_bdev_nvme.a 00:01:57.277 SO libspdk_bdev_nvme.so.7.0 00:01:57.277 SYMLINK libspdk_bdev_nvme.so 00:01:58.212 CC module/event/subsystems/iobuf/iobuf.o 00:01:58.212 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:58.212 CC module/event/subsystems/vmd/vmd.o 00:01:58.212 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:58.212 CC module/event/subsystems/sock/sock.o 00:01:58.212 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:58.212 CC module/event/subsystems/scheduler/scheduler.o 00:01:58.212 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:58.212 CC module/event/subsystems/keyring/keyring.o 00:01:58.212 LIB libspdk_event_vfu_tgt.a 00:01:58.212 LIB libspdk_event_iobuf.a 00:01:58.212 LIB libspdk_event_sock.a 00:01:58.212 LIB libspdk_event_vmd.a 00:01:58.212 LIB libspdk_event_scheduler.a 00:01:58.212 LIB libspdk_event_keyring.a 00:01:58.212 LIB libspdk_event_vhost_blk.a 00:01:58.212 SO libspdk_event_vfu_tgt.so.3.0 00:01:58.212 SO libspdk_event_iobuf.so.3.0 00:01:58.212 SO libspdk_event_scheduler.so.4.0 00:01:58.212 SO libspdk_event_sock.so.5.0 00:01:58.212 SO libspdk_event_vhost_blk.so.3.0 00:01:58.212 SO libspdk_event_vmd.so.6.0 00:01:58.212 SO libspdk_event_keyring.so.1.0 00:01:58.212 SYMLINK libspdk_event_sock.so 00:01:58.212 SYMLINK libspdk_event_vfu_tgt.so 00:01:58.212 SYMLINK libspdk_event_scheduler.so 00:01:58.212 SYMLINK libspdk_event_iobuf.so 00:01:58.212 SYMLINK libspdk_event_vhost_blk.so 00:01:58.212 SYMLINK libspdk_event_vmd.so 00:01:58.212 SYMLINK libspdk_event_keyring.so 00:01:58.780 CC module/event/subsystems/accel/accel.o 00:01:58.780 LIB libspdk_event_accel.a 00:01:58.780 SO libspdk_event_accel.so.6.0 00:01:58.780 SYMLINK libspdk_event_accel.so 00:01:59.346 CC module/event/subsystems/bdev/bdev.o 00:01:59.346 LIB libspdk_event_bdev.a 00:01:59.604 SO libspdk_event_bdev.so.6.0 00:01:59.604 SYMLINK libspdk_event_bdev.so 00:01:59.862 CC module/event/subsystems/nbd/nbd.o 00:01:59.862 CC module/event/subsystems/scsi/scsi.o 00:01:59.862 CC module/event/subsystems/ublk/ublk.o 00:01:59.862 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:59.862 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:00.120 LIB libspdk_event_nbd.a 00:02:00.120 SO libspdk_event_nbd.so.6.0 00:02:00.120 LIB libspdk_event_ublk.a 00:02:00.120 LIB libspdk_event_scsi.a 00:02:00.120 SYMLINK libspdk_event_nbd.so 00:02:00.120 SO libspdk_event_scsi.so.6.0 00:02:00.120 SO libspdk_event_ublk.so.3.0 00:02:00.120 LIB libspdk_event_nvmf.a 00:02:00.120 SYMLINK libspdk_event_scsi.so 00:02:00.120 SYMLINK libspdk_event_ublk.so 00:02:00.120 SO libspdk_event_nvmf.so.6.0 00:02:00.120 SYMLINK libspdk_event_nvmf.so 00:02:00.378 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:00.378 CC module/event/subsystems/iscsi/iscsi.o 00:02:00.636 LIB libspdk_event_vhost_scsi.a 00:02:00.636 LIB libspdk_event_iscsi.a 00:02:00.636 SO libspdk_event_vhost_scsi.so.3.0 00:02:00.636 SO libspdk_event_iscsi.so.6.0 00:02:00.636 SYMLINK libspdk_event_vhost_scsi.so 00:02:00.636 SYMLINK libspdk_event_iscsi.so 00:02:00.894 SO libspdk.so.6.0 00:02:00.894 SYMLINK libspdk.so 00:02:01.478 CC app/trace_record/trace_record.o 00:02:01.478 CC app/spdk_nvme_identify/identify.o 00:02:01.478 CC app/spdk_nvme_discover/discovery_aer.o 00:02:01.478 CC app/spdk_nvme_perf/perf.o 00:02:01.478 CXX app/trace/trace.o 00:02:01.478 CC 
test/rpc_client/rpc_client_test.o 00:02:01.478 CC app/spdk_lspci/spdk_lspci.o 00:02:01.478 TEST_HEADER include/spdk/accel.h 00:02:01.478 TEST_HEADER include/spdk/accel_module.h 00:02:01.478 TEST_HEADER include/spdk/barrier.h 00:02:01.478 TEST_HEADER include/spdk/assert.h 00:02:01.478 TEST_HEADER include/spdk/bdev_module.h 00:02:01.478 TEST_HEADER include/spdk/bdev.h 00:02:01.478 TEST_HEADER include/spdk/base64.h 00:02:01.478 TEST_HEADER include/spdk/bdev_zone.h 00:02:01.478 TEST_HEADER include/spdk/bit_array.h 00:02:01.478 TEST_HEADER include/spdk/bit_pool.h 00:02:01.478 TEST_HEADER include/spdk/blob_bdev.h 00:02:01.478 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:01.478 TEST_HEADER include/spdk/blob.h 00:02:01.478 TEST_HEADER include/spdk/blobfs.h 00:02:01.478 TEST_HEADER include/spdk/config.h 00:02:01.478 CC app/spdk_top/spdk_top.o 00:02:01.478 TEST_HEADER include/spdk/conf.h 00:02:01.478 TEST_HEADER include/spdk/cpuset.h 00:02:01.478 TEST_HEADER include/spdk/crc32.h 00:02:01.478 TEST_HEADER include/spdk/crc16.h 00:02:01.478 TEST_HEADER include/spdk/crc64.h 00:02:01.478 TEST_HEADER include/spdk/dif.h 00:02:01.478 TEST_HEADER include/spdk/dma.h 00:02:01.478 TEST_HEADER include/spdk/env_dpdk.h 00:02:01.478 TEST_HEADER include/spdk/endian.h 00:02:01.478 TEST_HEADER include/spdk/env.h 00:02:01.478 TEST_HEADER include/spdk/event.h 00:02:01.478 TEST_HEADER include/spdk/fd.h 00:02:01.478 TEST_HEADER include/spdk/fd_group.h 00:02:01.478 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:01.478 TEST_HEADER include/spdk/file.h 00:02:01.478 TEST_HEADER include/spdk/ftl.h 00:02:01.478 TEST_HEADER include/spdk/gpt_spec.h 00:02:01.478 CC app/iscsi_tgt/iscsi_tgt.o 00:02:01.478 TEST_HEADER include/spdk/hexlify.h 00:02:01.478 TEST_HEADER include/spdk/idxd.h 00:02:01.478 TEST_HEADER include/spdk/histogram_data.h 00:02:01.478 TEST_HEADER include/spdk/init.h 00:02:01.478 TEST_HEADER include/spdk/idxd_spec.h 00:02:01.478 TEST_HEADER include/spdk/ioat.h 00:02:01.478 TEST_HEADER include/spdk/ioat_spec.h 00:02:01.478 TEST_HEADER include/spdk/iscsi_spec.h 00:02:01.478 TEST_HEADER include/spdk/json.h 00:02:01.478 TEST_HEADER include/spdk/jsonrpc.h 00:02:01.478 TEST_HEADER include/spdk/keyring.h 00:02:01.478 TEST_HEADER include/spdk/likely.h 00:02:01.478 TEST_HEADER include/spdk/keyring_module.h 00:02:01.478 TEST_HEADER include/spdk/log.h 00:02:01.478 TEST_HEADER include/spdk/memory.h 00:02:01.478 TEST_HEADER include/spdk/lvol.h 00:02:01.478 TEST_HEADER include/spdk/mmio.h 00:02:01.478 TEST_HEADER include/spdk/nbd.h 00:02:01.478 TEST_HEADER include/spdk/notify.h 00:02:01.478 TEST_HEADER include/spdk/nvme.h 00:02:01.478 TEST_HEADER include/spdk/nvme_intel.h 00:02:01.478 CC app/nvmf_tgt/nvmf_main.o 00:02:01.478 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:01.478 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:01.478 CC app/vhost/vhost.o 00:02:01.478 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:01.478 TEST_HEADER include/spdk/nvme_spec.h 00:02:01.478 TEST_HEADER include/spdk/nvme_zns.h 00:02:01.478 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:01.478 TEST_HEADER include/spdk/nvmf.h 00:02:01.478 TEST_HEADER include/spdk/nvmf_transport.h 00:02:01.478 TEST_HEADER include/spdk/nvmf_spec.h 00:02:01.478 TEST_HEADER include/spdk/opal.h 00:02:01.478 TEST_HEADER include/spdk/opal_spec.h 00:02:01.478 TEST_HEADER include/spdk/pci_ids.h 00:02:01.478 TEST_HEADER include/spdk/pipe.h 00:02:01.478 TEST_HEADER include/spdk/queue.h 00:02:01.478 CC app/spdk_dd/spdk_dd.o 00:02:01.478 TEST_HEADER include/spdk/reduce.h 00:02:01.478 TEST_HEADER 
include/spdk/rpc.h 00:02:01.478 TEST_HEADER include/spdk/scheduler.h 00:02:01.478 TEST_HEADER include/spdk/scsi.h 00:02:01.478 TEST_HEADER include/spdk/scsi_spec.h 00:02:01.478 TEST_HEADER include/spdk/sock.h 00:02:01.478 TEST_HEADER include/spdk/stdinc.h 00:02:01.478 TEST_HEADER include/spdk/string.h 00:02:01.478 TEST_HEADER include/spdk/thread.h 00:02:01.478 TEST_HEADER include/spdk/trace.h 00:02:01.478 TEST_HEADER include/spdk/trace_parser.h 00:02:01.478 TEST_HEADER include/spdk/tree.h 00:02:01.478 TEST_HEADER include/spdk/util.h 00:02:01.478 TEST_HEADER include/spdk/ublk.h 00:02:01.478 TEST_HEADER include/spdk/uuid.h 00:02:01.478 TEST_HEADER include/spdk/version.h 00:02:01.478 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:01.478 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:01.478 TEST_HEADER include/spdk/vhost.h 00:02:01.478 TEST_HEADER include/spdk/vmd.h 00:02:01.478 TEST_HEADER include/spdk/xor.h 00:02:01.478 TEST_HEADER include/spdk/zipf.h 00:02:01.478 CXX test/cpp_headers/accel.o 00:02:01.478 CXX test/cpp_headers/accel_module.o 00:02:01.478 CXX test/cpp_headers/assert.o 00:02:01.478 CXX test/cpp_headers/base64.o 00:02:01.478 CXX test/cpp_headers/barrier.o 00:02:01.478 CXX test/cpp_headers/bdev.o 00:02:01.478 CXX test/cpp_headers/bdev_module.o 00:02:01.478 CXX test/cpp_headers/bdev_zone.o 00:02:01.478 CXX test/cpp_headers/bit_pool.o 00:02:01.478 CXX test/cpp_headers/bit_array.o 00:02:01.478 CXX test/cpp_headers/blob_bdev.o 00:02:01.478 CC app/spdk_tgt/spdk_tgt.o 00:02:01.478 CXX test/cpp_headers/blobfs.o 00:02:01.478 CXX test/cpp_headers/blobfs_bdev.o 00:02:01.478 CXX test/cpp_headers/blob.o 00:02:01.478 CXX test/cpp_headers/conf.o 00:02:01.478 CXX test/cpp_headers/config.o 00:02:01.478 CXX test/cpp_headers/cpuset.o 00:02:01.478 CXX test/cpp_headers/crc16.o 00:02:01.478 CXX test/cpp_headers/crc32.o 00:02:01.478 CXX test/cpp_headers/crc64.o 00:02:01.478 CXX test/cpp_headers/dif.o 00:02:01.478 CXX test/cpp_headers/dma.o 00:02:01.478 CXX test/cpp_headers/endian.o 00:02:01.478 CXX test/cpp_headers/env_dpdk.o 00:02:01.478 CXX test/cpp_headers/env.o 00:02:01.478 CXX test/cpp_headers/event.o 00:02:01.478 CXX test/cpp_headers/fd_group.o 00:02:01.478 CXX test/cpp_headers/fd.o 00:02:01.478 CXX test/cpp_headers/file.o 00:02:01.478 CXX test/cpp_headers/ftl.o 00:02:01.478 CXX test/cpp_headers/gpt_spec.o 00:02:01.478 CXX test/cpp_headers/hexlify.o 00:02:01.478 CXX test/cpp_headers/histogram_data.o 00:02:01.478 CXX test/cpp_headers/idxd.o 00:02:01.478 CXX test/cpp_headers/idxd_spec.o 00:02:01.478 CXX test/cpp_headers/init.o 00:02:01.478 CXX test/cpp_headers/ioat.o 00:02:01.478 CC examples/ioat/verify/verify.o 00:02:01.478 CC examples/idxd/perf/perf.o 00:02:01.478 CC examples/vmd/lsvmd/lsvmd.o 00:02:01.478 CC examples/ioat/perf/perf.o 00:02:01.478 CC examples/vmd/led/led.o 00:02:01.478 CC examples/nvme/hotplug/hotplug.o 00:02:01.478 CC examples/nvme/reconnect/reconnect.o 00:02:01.478 CC examples/sock/hello_world/hello_sock.o 00:02:01.478 CC test/event/event_perf/event_perf.o 00:02:01.478 CC examples/nvme/abort/abort.o 00:02:01.478 CC examples/nvme/arbitration/arbitration.o 00:02:01.478 CC test/app/histogram_perf/histogram_perf.o 00:02:01.478 CC examples/nvme/hello_world/hello_world.o 00:02:01.478 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:01.478 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:01.478 CC test/env/vtophys/vtophys.o 00:02:01.478 CC examples/nvmf/nvmf/nvmf.o 00:02:01.478 CC test/event/reactor/reactor.o 00:02:01.478 CC test/event/reactor_perf/reactor_perf.o 
00:02:01.478 CC test/app/stub/stub.o 00:02:01.478 CC test/thread/poller_perf/poller_perf.o 00:02:01.478 CC test/env/pci/pci_ut.o 00:02:01.478 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:01.478 CC examples/util/zipf/zipf.o 00:02:01.478 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:01.478 CC test/app/jsoncat/jsoncat.o 00:02:01.478 CC examples/accel/perf/accel_perf.o 00:02:01.747 CC test/env/memory/memory_ut.o 00:02:01.747 CC test/event/app_repeat/app_repeat.o 00:02:01.747 CC examples/thread/thread/thread_ex.o 00:02:01.748 CC test/nvme/cuse/cuse.o 00:02:01.748 CC test/nvme/sgl/sgl.o 00:02:01.748 CC test/nvme/overhead/overhead.o 00:02:01.748 CC test/nvme/boot_partition/boot_partition.o 00:02:01.748 CC test/nvme/e2edp/nvme_dp.o 00:02:01.748 CC test/nvme/aer/aer.o 00:02:01.748 CC examples/blob/cli/blobcli.o 00:02:01.748 CC test/nvme/reset/reset.o 00:02:01.748 CC test/nvme/err_injection/err_injection.o 00:02:01.748 CC test/blobfs/mkfs/mkfs.o 00:02:01.748 CC test/nvme/reserve/reserve.o 00:02:01.748 CC test/bdev/bdevio/bdevio.o 00:02:01.748 CC test/nvme/startup/startup.o 00:02:01.748 CC examples/bdev/hello_world/hello_bdev.o 00:02:01.748 CC test/nvme/fused_ordering/fused_ordering.o 00:02:01.748 CC test/nvme/compliance/nvme_compliance.o 00:02:01.748 CC app/fio/nvme/fio_plugin.o 00:02:01.748 CC examples/bdev/bdevperf/bdevperf.o 00:02:01.748 CC examples/blob/hello_world/hello_blob.o 00:02:01.748 CC test/dma/test_dma/test_dma.o 00:02:01.748 CC test/nvme/connect_stress/connect_stress.o 00:02:01.748 CC test/app/bdev_svc/bdev_svc.o 00:02:01.748 CC test/nvme/simple_copy/simple_copy.o 00:02:01.748 CC test/accel/dif/dif.o 00:02:01.748 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:01.748 CC test/event/scheduler/scheduler.o 00:02:01.748 CC test/nvme/fdp/fdp.o 00:02:01.748 CC app/fio/bdev/fio_plugin.o 00:02:01.748 LINK spdk_lspci 00:02:02.010 LINK spdk_nvme_discover 00:02:02.010 LINK rpc_client_test 00:02:02.010 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:02.011 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:02.011 LINK interrupt_tgt 00:02:02.011 LINK nvmf_tgt 00:02:02.011 CC test/env/mem_callbacks/mem_callbacks.o 00:02:02.011 LINK histogram_perf 00:02:02.011 LINK lsvmd 00:02:02.011 CC test/lvol/esnap/esnap.o 00:02:02.011 LINK reactor 00:02:02.011 LINK vhost 00:02:02.011 LINK led 00:02:02.011 LINK jsoncat 00:02:02.011 LINK zipf 00:02:02.011 LINK poller_perf 00:02:02.011 LINK iscsi_tgt 00:02:02.011 LINK spdk_tgt 00:02:02.275 LINK spdk_trace_record 00:02:02.275 LINK cmb_copy 00:02:02.275 CXX test/cpp_headers/ioat_spec.o 00:02:02.275 LINK stub 00:02:02.275 CXX test/cpp_headers/iscsi_spec.o 00:02:02.275 CXX test/cpp_headers/json.o 00:02:02.275 LINK reactor_perf 00:02:02.275 CXX test/cpp_headers/jsonrpc.o 00:02:02.275 CXX test/cpp_headers/keyring.o 00:02:02.275 CXX test/cpp_headers/keyring_module.o 00:02:02.275 LINK event_perf 00:02:02.275 LINK vtophys 00:02:02.275 CXX test/cpp_headers/likely.o 00:02:02.275 LINK verify 00:02:02.275 CXX test/cpp_headers/log.o 00:02:02.275 LINK env_dpdk_post_init 00:02:02.275 CXX test/cpp_headers/lvol.o 00:02:02.275 LINK ioat_perf 00:02:02.275 CXX test/cpp_headers/memory.o 00:02:02.275 LINK startup 00:02:02.275 LINK app_repeat 00:02:02.275 CXX test/cpp_headers/mmio.o 00:02:02.275 CXX test/cpp_headers/nbd.o 00:02:02.275 LINK pmr_persistence 00:02:02.275 LINK hotplug 00:02:02.275 CXX test/cpp_headers/notify.o 00:02:02.275 CXX test/cpp_headers/nvme.o 00:02:02.275 LINK hello_sock 00:02:02.275 CXX test/cpp_headers/nvme_intel.o 00:02:02.275 CXX test/cpp_headers/nvme_ocssd.o 
00:02:02.275 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:02.275 LINK bdev_svc 00:02:02.275 CXX test/cpp_headers/nvme_spec.o 00:02:02.275 LINK connect_stress 00:02:02.275 LINK reserve 00:02:02.275 CXX test/cpp_headers/nvme_zns.o 00:02:02.275 CXX test/cpp_headers/nvmf_cmd.o 00:02:02.275 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:02.275 LINK mkfs 00:02:02.275 CXX test/cpp_headers/nvmf.o 00:02:02.275 CXX test/cpp_headers/nvmf_spec.o 00:02:02.275 CXX test/cpp_headers/nvmf_transport.o 00:02:02.275 CXX test/cpp_headers/opal.o 00:02:02.275 CXX test/cpp_headers/opal_spec.o 00:02:02.275 CXX test/cpp_headers/pci_ids.o 00:02:02.275 CXX test/cpp_headers/pipe.o 00:02:02.275 LINK boot_partition 00:02:02.275 CXX test/cpp_headers/queue.o 00:02:02.275 CXX test/cpp_headers/reduce.o 00:02:02.275 CXX test/cpp_headers/rpc.o 00:02:02.275 CXX test/cpp_headers/scsi.o 00:02:02.275 CXX test/cpp_headers/scheduler.o 00:02:02.275 CXX test/cpp_headers/scsi_spec.o 00:02:02.275 LINK hello_bdev 00:02:02.275 CXX test/cpp_headers/sock.o 00:02:02.275 LINK hello_blob 00:02:02.275 LINK hello_world 00:02:02.275 CXX test/cpp_headers/stdinc.o 00:02:02.275 LINK scheduler 00:02:02.275 LINK doorbell_aers 00:02:02.275 LINK fused_ordering 00:02:02.275 CXX test/cpp_headers/string.o 00:02:02.275 LINK sgl 00:02:02.275 LINK err_injection 00:02:02.275 LINK nvme_dp 00:02:02.275 CXX test/cpp_headers/thread.o 00:02:02.275 LINK thread 00:02:02.275 LINK overhead 00:02:02.275 CXX test/cpp_headers/trace.o 00:02:02.275 LINK simple_copy 00:02:02.275 LINK aer 00:02:02.275 CXX test/cpp_headers/trace_parser.o 00:02:02.536 CXX test/cpp_headers/tree.o 00:02:02.536 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:02.536 LINK arbitration 00:02:02.536 CXX test/cpp_headers/ublk.o 00:02:02.536 LINK reset 00:02:02.536 CXX test/cpp_headers/util.o 00:02:02.536 LINK abort 00:02:02.536 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:02.536 LINK idxd_perf 00:02:02.536 CXX test/cpp_headers/uuid.o 00:02:02.536 CXX test/cpp_headers/vfio_user_pci.o 00:02:02.536 CXX test/cpp_headers/version.o 00:02:02.536 LINK nvmf 00:02:02.536 CXX test/cpp_headers/vfio_user_spec.o 00:02:02.536 CXX test/cpp_headers/vhost.o 00:02:02.536 CXX test/cpp_headers/vmd.o 00:02:02.536 CXX test/cpp_headers/xor.o 00:02:02.536 LINK dif 00:02:02.536 LINK reconnect 00:02:02.536 LINK spdk_dd 00:02:02.536 CXX test/cpp_headers/zipf.o 00:02:02.536 LINK spdk_trace 00:02:02.536 LINK bdevio 00:02:02.536 LINK nvme_compliance 00:02:02.536 LINK fdp 00:02:02.536 LINK test_dma 00:02:02.794 LINK accel_perf 00:02:02.794 LINK pci_ut 00:02:02.794 LINK blobcli 00:02:02.794 LINK nvme_manage 00:02:02.794 LINK spdk_nvme 00:02:02.794 LINK nvme_fuzz 00:02:02.794 LINK spdk_bdev 00:02:02.794 LINK spdk_nvme_identify 00:02:02.794 LINK spdk_nvme_perf 00:02:02.794 LINK mem_callbacks 00:02:03.054 LINK spdk_top 00:02:03.054 LINK vhost_fuzz 00:02:03.054 LINK bdevperf 00:02:03.054 LINK memory_ut 00:02:03.313 LINK cuse 00:02:03.573 LINK iscsi_fuzz 00:02:05.478 LINK esnap 00:02:05.737 00:02:05.737 real 0m47.992s 00:02:05.737 user 6m33.800s 00:02:05.737 sys 4m16.588s 00:02:05.737 12:03:34 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:02:05.737 12:03:34 make -- common/autotest_common.sh@10 -- $ set +x 00:02:05.737 ************************************ 00:02:05.737 END TEST make 00:02:05.737 ************************************ 00:02:05.996 12:03:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:05.996 12:03:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:05.996 12:03:34 -- pm/common@40 -- $ local 
monitor pid pids signal=TERM 00:02:05.996 12:03:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.996 12:03:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:05.996 12:03:34 -- pm/common@44 -- $ pid=1823366 00:02:05.996 12:03:34 -- pm/common@50 -- $ kill -TERM 1823366 00:02:05.996 12:03:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.996 12:03:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:05.996 12:03:34 -- pm/common@44 -- $ pid=1823368 00:02:05.996 12:03:34 -- pm/common@50 -- $ kill -TERM 1823368 00:02:05.996 12:03:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.996 12:03:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:05.996 12:03:34 -- pm/common@44 -- $ pid=1823370 00:02:05.996 12:03:34 -- pm/common@50 -- $ kill -TERM 1823370 00:02:05.996 12:03:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.996 12:03:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:05.996 12:03:34 -- pm/common@44 -- $ pid=1823396 00:02:05.996 12:03:34 -- pm/common@50 -- $ sudo -E kill -TERM 1823396 00:02:05.996 12:03:34 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:05.996 12:03:34 -- nvmf/common.sh@7 -- # uname -s 00:02:05.996 12:03:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:05.996 12:03:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:05.996 12:03:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:05.996 12:03:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:05.996 12:03:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:05.996 12:03:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:05.996 12:03:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:05.996 12:03:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:05.996 12:03:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:05.996 12:03:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:05.996 12:03:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:02:05.996 12:03:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:02:05.996 12:03:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:05.996 12:03:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:05.996 12:03:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:05.996 12:03:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:05.996 12:03:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:05.996 12:03:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:05.996 12:03:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.996 12:03:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.996 12:03:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.996 
12:03:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.996 12:03:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.997 12:03:34 -- paths/export.sh@5 -- # export PATH 00:02:05.997 12:03:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.997 12:03:34 -- nvmf/common.sh@47 -- # : 0 00:02:05.997 12:03:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:05.997 12:03:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:05.997 12:03:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:05.997 12:03:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:05.997 12:03:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:05.997 12:03:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:05.997 12:03:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:05.997 12:03:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:05.997 12:03:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:05.997 12:03:34 -- spdk/autotest.sh@32 -- # uname -s 00:02:05.997 12:03:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:05.997 12:03:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:05.997 12:03:34 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:05.997 12:03:34 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:05.997 12:03:34 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:05.997 12:03:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:05.997 12:03:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:05.997 12:03:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:05.997 12:03:34 -- spdk/autotest.sh@48 -- # udevadm_pid=1884206 00:02:05.997 12:03:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:05.997 12:03:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:05.997 12:03:34 -- pm/common@17 -- # local monitor 00:02:05.997 12:03:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.997 12:03:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.997 12:03:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.997 12:03:34 -- pm/common@21 -- # date +%s 00:02:05.997 12:03:34 -- pm/common@21 -- # date +%s 00:02:05.997 12:03:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.997 12:03:34 -- pm/common@25 -- # sleep 1 00:02:05.997 12:03:34 -- pm/common@21 -- # date +%s 00:02:05.997 12:03:34 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715767414 00:02:05.997 12:03:34 -- pm/common@21 -- # date +%s 00:02:05.997 12:03:34 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715767414 00:02:05.997 12:03:34 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715767414 00:02:05.997 12:03:34 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715767414 00:02:05.997 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715767414_collect-cpu-load.pm.log 00:02:06.255 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715767414_collect-vmstat.pm.log 00:02:06.255 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715767414_collect-cpu-temp.pm.log 00:02:06.255 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715767414_collect-bmc-pm.bmc.pm.log 00:02:07.193 12:03:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:07.193 12:03:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:07.193 12:03:35 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:07.193 12:03:35 -- common/autotest_common.sh@10 -- # set +x 00:02:07.193 12:03:35 -- spdk/autotest.sh@59 -- # create_test_list 00:02:07.193 12:03:35 -- common/autotest_common.sh@745 -- # xtrace_disable 00:02:07.193 12:03:35 -- common/autotest_common.sh@10 -- # set +x 00:02:07.193 12:03:35 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:07.193 12:03:35 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.193 12:03:35 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.193 12:03:35 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:07.193 12:03:35 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.193 12:03:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:07.193 12:03:35 -- common/autotest_common.sh@1452 -- # uname 00:02:07.193 12:03:35 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:02:07.193 12:03:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:07.193 12:03:35 -- common/autotest_common.sh@1472 -- # uname 00:02:07.193 12:03:35 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:02:07.193 12:03:35 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:07.193 12:03:35 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:07.193 12:03:35 -- spdk/autotest.sh@72 -- # hash lcov 00:02:07.193 12:03:35 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:07.193 12:03:35 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:07.193 --rc lcov_branch_coverage=1 00:02:07.193 --rc lcov_function_coverage=1 00:02:07.193 --rc genhtml_branch_coverage=1 00:02:07.193 --rc genhtml_function_coverage=1 00:02:07.193 --rc genhtml_legend=1 
00:02:07.193 --rc geninfo_all_blocks=1 00:02:07.193 ' 00:02:07.193 12:03:35 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:07.193 --rc lcov_branch_coverage=1 00:02:07.193 --rc lcov_function_coverage=1 00:02:07.193 --rc genhtml_branch_coverage=1 00:02:07.193 --rc genhtml_function_coverage=1 00:02:07.193 --rc genhtml_legend=1 00:02:07.193 --rc geninfo_all_blocks=1 00:02:07.193 ' 00:02:07.193 12:03:35 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:07.193 --rc lcov_branch_coverage=1 00:02:07.193 --rc lcov_function_coverage=1 00:02:07.193 --rc genhtml_branch_coverage=1 00:02:07.193 --rc genhtml_function_coverage=1 00:02:07.193 --rc genhtml_legend=1 00:02:07.193 --rc geninfo_all_blocks=1 00:02:07.193 --no-external' 00:02:07.193 12:03:35 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:07.193 --rc lcov_branch_coverage=1 00:02:07.193 --rc lcov_function_coverage=1 00:02:07.193 --rc genhtml_branch_coverage=1 00:02:07.193 --rc genhtml_function_coverage=1 00:02:07.193 --rc genhtml_legend=1 00:02:07.193 --rc geninfo_all_blocks=1 00:02:07.193 --no-external' 00:02:07.193 12:03:35 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:07.193 lcov: LCOV version 1.14 00:02:07.193 12:03:35 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:17.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:17.243 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:17.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:17.501 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:17.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:17.501 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:17.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:17.501 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:29.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:29.712 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:29.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:29.712 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:29.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:29.712 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:29.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:29.712 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:29.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:29.712 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:29.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:29.713 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:29.713 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:29.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:29.713 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:29.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:29.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:29.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:29.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:29.974 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:29.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:29.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:29.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:29.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:29.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:29.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:29.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:29.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:29.974 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:29.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:29.975 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:30.235 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 
00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:30.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:31.614 12:04:00 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:31.614 12:04:00 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:31.614 12:04:00 -- common/autotest_common.sh@10 -- # set +x 00:02:31.614 12:04:00 -- spdk/autotest.sh@91 -- # rm -f 00:02:31.614 12:04:00 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:34.906 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:34.906 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:35.165 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:35.165 12:04:03 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:35.165 12:04:03 -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:35.165 12:04:03 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:35.165 12:04:03 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:35.165 12:04:03 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:35.165 12:04:03 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:35.165 12:04:03 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:35.165 12:04:03 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:35.165 12:04:03 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:35.165 12:04:03 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:35.165 12:04:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:35.165 12:04:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:35.165 12:04:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:35.165 12:04:03 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:35.165 12:04:03 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:35.165 No valid GPT data, bailing 00:02:35.165 
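The long run of geninfo "no functions found" messages above is only a warning: the baseline capture visits every .gcno object in the tree, and the stub and test/cpp_headers objects contain no instrumented functions, so geninfo has nothing to record for them and the run continues. Reduced to a sketch, with SPDK_DIR and OUT_DIR standing in for the long workspace paths and the options exactly as logged:
  # Zero-count coverage baseline (lcov -c -i), as run by autotest.sh above
  lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
       --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
       --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external \
       -q -c -i -t Baseline -d "$SPDK_DIR" -o "$OUT_DIR/cov_base.info"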
12:04:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:35.165 12:04:03 -- scripts/common.sh@391 -- # pt= 00:02:35.165 12:04:03 -- scripts/common.sh@392 -- # return 1 00:02:35.165 12:04:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:35.165 1+0 records in 00:02:35.165 1+0 records out 00:02:35.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508056 s, 206 MB/s 00:02:35.165 12:04:03 -- spdk/autotest.sh@118 -- # sync 00:02:35.166 12:04:03 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:35.166 12:04:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:35.166 12:04:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:43.289 12:04:10 -- spdk/autotest.sh@124 -- # uname -s 00:02:43.289 12:04:10 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:43.289 12:04:10 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:43.289 12:04:10 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:43.289 12:04:10 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:43.289 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:02:43.289 ************************************ 00:02:43.289 START TEST setup.sh 00:02:43.289 ************************************ 00:02:43.289 12:04:10 setup.sh -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:43.289 * Looking for test storage... 00:02:43.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:43.289 12:04:10 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:43.289 12:04:10 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:43.289 12:04:10 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:43.289 12:04:10 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:43.289 12:04:10 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:43.289 12:04:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:43.289 ************************************ 00:02:43.289 START TEST acl 00:02:43.289 ************************************ 00:02:43.289 12:04:10 setup.sh.acl -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:43.289 * Looking for test storage... 
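Before the setup tests start, pre_cleanup above probes whether /dev/nvme0n1 holds anything it should respect: spdk-gpt.py reports no valid GPT, blkid returns no PTTYPE, and the first MiB is then zeroed. A condensed sketch of that sequence, with SPDK_DIR standing in for the workspace path:
  dev=/dev/nvme0n1
  "$SPDK_DIR/scripts/spdk-gpt.py" "$dev"          # logged above: "No valid GPT data, bailing"
  blkid -s PTTYPE -o value "$dev"                 # empty output: no partition-table signature
  dd if=/dev/zero of="$dev" bs=1M count=1         # device treated as free, clear the label area
  sync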
00:02:43.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:43.289 12:04:10 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:43.289 12:04:10 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:43.289 12:04:10 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:43.289 12:04:10 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:43.289 12:04:10 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:43.289 12:04:10 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:43.289 12:04:10 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:43.289 12:04:10 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:43.289 12:04:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:43.289 12:04:10 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:43.289 12:04:10 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:43.289 12:04:10 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:43.289 12:04:10 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:43.289 12:04:10 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:43.289 12:04:10 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:43.289 12:04:10 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:46.660 12:04:14 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:46.660 12:04:14 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:46.660 12:04:14 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:46.660 12:04:14 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.660 12:04:14 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:46.660 12:04:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 Hugepages 00:02:49.202 node hugesize free / total 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 00:02:49.202 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:49.202 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.462 12:04:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:49.462 12:04:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:49.462 12:04:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:49.462 12:04:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:49.462 12:04:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:49.462 12:04:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.462 12:04:17 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:49.462 12:04:17 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:49.462 12:04:17 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:49.462 12:04:17 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:49.462 12:04:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:49.462 ************************************ 00:02:49.462 START TEST denied 00:02:49.462 ************************************ 00:02:49.462 12:04:17 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:02:49.462 12:04:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:49.462 12:04:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:49.462 12:04:17 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:49.462 12:04:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.462 12:04:17 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:52.755 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:52.755 12:04:21 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:52.755 12:04:21 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:52.755 12:04:21 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:52.755 12:04:21 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:52.755 12:04:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:52.755 12:04:21 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:52.755 12:04:21 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:52.755 12:04:21 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:52.755 12:04:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:52.755 12:04:21 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.030 00:02:58.030 real 0m7.921s 00:02:58.030 user 0m2.418s 00:02:58.030 sys 0m4.850s 00:02:58.030 12:04:25 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:02:58.030 12:04:25 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:58.030 ************************************ 00:02:58.030 END TEST denied 00:02:58.030 ************************************ 00:02:58.030 12:04:25 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:58.030 12:04:25 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:58.030 12:04:25 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:58.030 12:04:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:58.030 ************************************ 00:02:58.030 START TEST allowed 00:02:58.030 ************************************ 00:02:58.030 12:04:25 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:02:58.030 12:04:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:58.030 12:04:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:58.030 12:04:25 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:58.030 12:04:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.030 12:04:25 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:02.249 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:02.249 12:04:30 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:02.249 12:04:30 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:02.249 12:04:30 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:02.249 12:04:30 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.249 12:04:30 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.448 00:03:06.448 real 0m8.465s 00:03:06.448 user 0m2.246s 00:03:06.448 sys 0m4.659s 00:03:06.448 12:04:34 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:06.448 12:04:34 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:06.448 ************************************ 00:03:06.448 END TEST allowed 00:03:06.448 ************************************ 00:03:06.448 00:03:06.448 real 0m23.510s 00:03:06.448 user 0m7.140s 00:03:06.448 sys 0m14.341s 00:03:06.448 12:04:34 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:06.448 12:04:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:06.448 ************************************ 00:03:06.448 END TEST acl 00:03:06.448 ************************************ 00:03:06.448 12:04:34 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.448 12:04:34 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:06.448 12:04:34 setup.sh -- 
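The denied and allowed ACL tests in this stretch of the log drive the same setup.sh config pass with opposite PCI filters, using the one NVMe controller found earlier (0000:d8:00.0). A minimal reproduction of the two checks, with SPDK_DIR as a placeholder for the workspace path:
  # denied: the blocked controller must be reported as skipped
  PCI_BLOCKED=' 0000:d8:00.0' "$SPDK_DIR/scripts/setup.sh" config | \
      grep 'Skipping denied controller at 0000:d8:00.0'
  # allowed: only the allowed controller is rebound (nvme -> vfio-pci in the log)
  PCI_ALLOWED='0000:d8:00.0' "$SPDK_DIR/scripts/setup.sh" config | \
      grep -E '0000:d8:00.0 .*: nvme -> .*'
Each test then runs setup.sh reset, as the trace shows, so the controller is handed back to its original driver before the next test.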
common/autotest_common.sh@1104 -- # xtrace_disable 00:03:06.448 12:04:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:06.448 ************************************ 00:03:06.448 START TEST hugepages 00:03:06.448 ************************************ 00:03:06.448 12:04:34 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.448 * Looking for test storage... 00:03:06.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 37958576 kB' 'MemAvailable: 42632680 kB' 'Buffers: 3728 kB' 'Cached: 14251316 kB' 'SwapCached: 0 kB' 'Active: 10292232 kB' 'Inactive: 4456536 kB' 'Active(anon): 9725780 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497064 kB' 'Mapped: 229324 kB' 'Shmem: 9232056 kB' 'KReclaimable: 296508 kB' 'Slab: 934636 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 638128 kB' 'KernelStack: 22064 kB' 'PageTables: 9292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439056 kB' 'Committed_AS: 11081252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216376 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.448 12:04:34 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.448 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.449 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.450 12:04:34 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:06.450 12:04:34 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:06.450 12:04:34 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:06.450 12:04:34 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:06.450 12:04:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:06.450 ************************************ 00:03:06.450 START TEST default_setup 00:03:06.450 ************************************ 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.450 12:04:34 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:09.737 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 
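The trace above fixes the default hugepage size at 2048 kB, derives 1024 pages from the 2097152 kB request for node 0, zeroes every existing per-node pool (clear_hp, CLEAR_HUGE=yes), and then hands off to scripts/setup.sh, whose output in the surrounding lines rebinds the ioatdma and NVMe devices to vfio-pci. A minimal sketch of that allocation arithmetic, assuming only the standard sysfs hugepage paths visible in the trace (this is not the actual scripts/setup.sh):

    # 2097152 kB requested / 2048 kB per page => 1024 hugepages, all on NUMA node 0
    size_kb=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
    nr_pages=$(( size_kb / hugepagesize_kb ))                            # 1024
    for f in /sys/devices/system/node/node*/hugepages/hugepages-"${hugepagesize_kb}"kB/nr_hugepages; do
        echo 0 | sudo tee "$f" > /dev/null        # clear_hp: start every node from zero
    done
    echo "$nr_pages" | sudo tee \
        /sys/devices/system/node/node0/hugepages/hugepages-"${hugepagesize_kb}"kB/nr_hugepages > /dev/null

The echo 0 calls in the clear_hp trace are this same per-node loop; the 1024-page pool visible in the later meminfo snapshots is what scripts/setup.sh ends up creating.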
00:03:09.737 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:09.737 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:11.685 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40113164 kB' 'MemAvailable: 44787268 kB' 'Buffers: 3728 kB' 'Cached: 14251444 kB' 'SwapCached: 0 kB' 'Active: 10309400 kB' 'Inactive: 4456536 kB' 'Active(anon): 9742948 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514232 kB' 'Mapped: 229964 kB' 'Shmem: 9232184 kB' 'KReclaimable: 296508 kB' 'Slab: 932616 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636108 kB' 'KernelStack: 22272 kB' 'PageTables: 9676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11101688 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 216664 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.685 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
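What the long runs of [[ key == AnonHugePages ]] / continue are doing: setup/common.sh's get_meminfo reads one snapshot of /proc/meminfo (or of /sys/devices/system/node/nodeN/meminfo when a node is given), walks it line by line with IFS=': ', and prints the value of the single key it was asked for. A simplified, self-contained equivalent of that loop, assuming a stock /proc/meminfo layout (the per-node variant in the trace additionally strips the leading "Node N " prefix from each line):

    get_meminfo() {
        # Print the value column of one /proc/meminfo key, e.g. `get_meminfo Hugepagesize` -> 2048
        local get=$1 var val _
        while IFS=': ' read -r var val _; do      # "Hugepagesize:  2048 kB" -> var=Hugepagesize, val=2048
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1                                  # requested key not present
    }

On this box the scans return 2048 for Hugepagesize and 0 for AnonHugePages, HugePages_Surp and HugePages_Rsvd, which is where the echo 2048 / echo 0 followed by return 0 lines in the trace come from.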
00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.686 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.687 12:04:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
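With anon=0 established just above, verify_nr_hugepages repeats the same meminfo scan for HugePages_Surp and then HugePages_Rsvd over the next stretch of the trace. In outline (a hedged sketch of the bookkeeping, not the literal setup/hugepages.sh code), the check amounts to:

    anon=$(get_meminfo AnonHugePages)     # 0 here; only sampled because THP is not set to [never]
    surp=$(get_meminfo HugePages_Surp)    # surplus pages allocated beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)    # pages reserved by mappings but not yet faulted in
    # With all three at 0, the HugePages_Total/HugePages_Free pair (1024/1024 in the snapshots
    # above) can be compared directly against the 1024 pages default_setup requested.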
00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40111812 kB' 'MemAvailable: 44785916 kB' 'Buffers: 3728 kB' 'Cached: 14251448 kB' 'SwapCached: 0 kB' 'Active: 10314816 kB' 'Inactive: 4456536 kB' 'Active(anon): 9748364 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520004 kB' 'Mapped: 229880 kB' 'Shmem: 9232188 kB' 'KReclaimable: 296508 kB' 'Slab: 932584 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636076 kB' 'KernelStack: 22432 kB' 'PageTables: 9988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11106604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.687 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.688 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 
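The trace up to this point is setup/common.sh's get_meminfo walking a meminfo file key by key: it picks /proc/meminfo (or a per-node file when a node number is passed), strips any "Node <n>" prefix, splits each line on IFS=': ', and keeps continuing until the requested field (HugePages_Surp above, HugePages_Rsvd next) matches, at which point it echoes the value and returns. The stand-alone sketch below reproduces that lookup pattern under a hypothetical name, get_meminfo_value; it is an illustration of what the trace shows, not the SPDK helper itself.

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern traced above (hypothetical helper name,
# not the real setup/common.sh code): print the value of one meminfo field,
# system-wide by default or for a single NUMA node when one is given.
shopt -s extglob

get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem line var val _

    # Per-node statistics live under /sys and prefix every line with "Node <n>".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node <n> " prefix, if any

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then    # every other key is skipped, as in the trace
            echo "$val"                  # e.g. 0 for HugePages_Rsvd above
            return 0
        fi
    done
    return 1
}

get_meminfo_value HugePages_Rsvd        # system-wide value
get_meminfo_value HugePages_Surp 0      # value for NUMA node 0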
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40116520 kB' 'MemAvailable: 44790624 kB' 'Buffers: 3728 kB' 'Cached: 14251460 kB' 'SwapCached: 0 kB' 'Active: 10309476 kB' 'Inactive: 4456536 kB' 'Active(anon): 9743024 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513560 kB' 'Mapped: 230240 kB' 'Shmem: 9232200 kB' 'KReclaimable: 296508 kB' 'Slab: 932608 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636100 kB' 'KernelStack: 22528 kB' 'PageTables: 10160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11114400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 
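The snapshot just printed already lets the hugepage numbers be cross-checked by hand: HugePages_Total is 1024, Hugepagesize is 2048 kB, and the reported Hugetlb figure is exactly their product. A one-line sanity check, with this run's values hard-coded:

# 1024 hugepages of 2048 kB each account for the whole 'Hugetlb' figure above.
echo $(( 1024 * 2048 ))   # 2097152 (kB), matching 'Hugetlb: 2097152 kB'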
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.689 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.690 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:11.691 nr_hugepages=1024 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:11.691 resv_hugepages=0 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:11.691 surplus_hugepages=0 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:11.691 anon_hugepages=0 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40116692 kB' 'MemAvailable: 44790796 kB' 'Buffers: 3728 kB' 'Cached: 14251460 kB' 'SwapCached: 0 kB' 'Active: 10311984 
kB' 'Inactive: 4456536 kB' 'Active(anon): 9745532 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517104 kB' 'Mapped: 229872 kB' 'Shmem: 9232200 kB' 'KReclaimable: 296508 kB' 'Slab: 932512 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636004 kB' 'KernelStack: 22592 kB' 'PageTables: 10636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11104136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.691 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:11.692 
12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.692 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19223324 kB' 'MemUsed: 13415816 kB' 'SwapCached: 0 kB' 'Active: 6805668 kB' 'Inactive: 3369880 kB' 'Active(anon): 6516604 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3369880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9863456 kB' 'Mapped: 146624 kB' 'AnonPages: 315212 kB' 'Shmem: 6204512 kB' 'KernelStack: 12152 kB' 'PageTables: 5944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178792 kB' 'Slab: 504368 kB' 'SReclaimable: 178792 kB' 'SUnreclaim: 325576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 
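With HugePages_Total read back as 1024, hugepages.sh@110 repeats the consistency check from @107: the total the kernel reports must equal the requested nr_hugepages plus the surplus and reserved counts gathered earlier (surp=0 at @99, resv=0 at @100), which here reduces to 1024 == 1024 + 0 + 0 before the per-node pass at @112 starts. A hedged recreation of that check, with this run's values hard-coded for illustration:

#!/usr/bin/env bash
# Recreation (not the SPDK script itself) of the hugepage accounting check.
nr_hugepages=1024   # requested default hugepage count
surp=0              # HugePages_Surp  from /proc/meminfo
resv=0              # HugePages_Rsvd  from /proc/meminfo
total=1024          # HugePages_Total from /proc/meminfo

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting is consistent"
else
    echo "unexpected hugepage totals" >&2
    exit 1
fi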
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.693 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.694 12:04:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:11.694 node0=1024 expecting 1024 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:11.694 00:03:11.694 real 0m5.349s 00:03:11.694 user 0m1.377s 00:03:11.694 sys 0m2.451s 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:11.694 12:04:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:11.694 ************************************ 00:03:11.694 END TEST default_setup 00:03:11.694 ************************************ 00:03:11.694 12:04:40 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:11.694 12:04:40 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:11.694 12:04:40 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:11.694 12:04:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:11.694 ************************************ 00:03:11.694 START TEST per_node_1G_alloc 00:03:11.694 ************************************ 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
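The default_setup check above ends with the value it expected (node0=1024 expecting 1024), and per_node_1G_alloc then turns the requested 1048576 kB into a per-node page count before re-running scripts/setup.sh. A minimal bash sketch of that sizing step, assuming the 2048 kB Hugepagesize reported in the meminfo snapshots further down; the variable names here are illustrative, not the real setup/hugepages.sh ones:

  #!/usr/bin/env bash
  # Sketch only: per-node hugepage sizing as the trace walks through it.
  default_hugepage_kb=2048          # Hugepagesize from the meminfo snapshot
  want_size_kb=1048576              # size requested by the 1G-per-node test
  user_nodes=(0 1)                  # nodes named by the test
  nodes_test=()
  (( nr_hugepages = want_size_kb / default_hugepage_kb ))   # 1048576/2048 = 512
  for node in "${user_nodes[@]}"; do
      nodes_test[$node]=$nr_hugepages                       # 512 pages per node
  done
  hugenode=$(IFS=,; echo "${user_nodes[*]}")
  echo "NRHUGE=$nr_hugepages HUGENODE=$hugenode"            # NRHUGE=512 HUGENODE=0,1

With those two values exported as NRHUGE and HUGENODE, the trace that follows shows scripts/setup.sh being invoked to apply the reservation on nodes 0 and 1.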
00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.694 12:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.991 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:14.991 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40108376 kB' 'MemAvailable: 44782480 kB' 'Buffers: 3728 kB' 'Cached: 14251592 kB' 'SwapCached: 0 kB' 'Active: 10307168 kB' 'Inactive: 4456536 kB' 'Active(anon): 9740716 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511720 kB' 'Mapped: 228804 kB' 'Shmem: 9232332 kB' 'KReclaimable: 296508 kB' 'Slab: 932324 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 635816 kB' 'KernelStack: 22048 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11090688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216616 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 
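The long printf block above is the /proc/meminfo snapshot that get_meminfo has just captured with mapfile; the loop that follows splits each entry with IFS=': ' and read -r var val _, skipping every key until it reaches the one requested (AnonHugePages here) and echoes its value. A self-contained sketch of that lookup, assuming only the system-wide /proc/meminfo is read; meminfo_value is an illustrative name, and the per-node "Node N" prefix handling of the real helper is left out:

  #!/usr/bin/env bash
  # Sketch only: return the value of one /proc/meminfo key, the same way the
  # trace parses it (split on ': ', compare the key, echo the value field).
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  anon=$(meminfo_value AnonHugePages)     # 0 kB in the snapshot above
  total=$(meminfo_value HugePages_Total)  # 1024 in the snapshot above
  echo "anon=$anon total=$total"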
00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 
12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.991 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
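With anon=0 recorded, the same field-by-field walk is repeated below for HugePages_Surp and then HugePages_Rsvd, and the resulting counters are eventually weighed against the count the test configured (nr_hugepages=1024 spread over nodes 0 and 1 here). A rough sketch of that kind of end check, using a single awk pass instead of the script's loop; expected and the variable names are illustrative, not the actual verify_nr_hugepages code:

  #!/usr/bin/env bash
  # Sketch only: pull the hugepage counters in one pass and compare the total
  # against the value the test asked for.
  expected=1024
  read -r total free rsvd surp < <(awk '
      /^HugePages_Total:/ {t=$2} /^HugePages_Free:/ {f=$2}
      /^HugePages_Rsvd:/  {r=$2} /^HugePages_Surp:/ {s=$2}
      END {print t, f, r, s}' /proc/meminfo)
  echo "total=$total free=$free rsvd=$rsvd surp=$surp"
  [[ $total -eq $expected ]] || echo "unexpected HugePages_Total: $total (wanted $expected)"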
00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.992 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40108336 kB' 'MemAvailable: 44782440 kB' 'Buffers: 3728 kB' 'Cached: 14251592 kB' 'SwapCached: 0 kB' 'Active: 10310924 kB' 'Inactive: 4456536 kB' 'Active(anon): 9744472 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515448 kB' 'Mapped: 228804 kB' 'Shmem: 9232332 kB' 'KReclaimable: 296508 kB' 'Slab: 932324 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 635816 kB' 'KernelStack: 22128 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11093612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216632 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.993 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:14.994 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.994 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40104692 kB' 'MemAvailable: 44778796 kB' 'Buffers: 3728 kB' 'Cached: 14251596 kB' 'SwapCached: 0 kB' 'Active: 10311816 kB' 'Inactive: 4456536 kB' 'Active(anon): 9745364 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516252 kB' 'Mapped: 229208 kB' 'Shmem: 9232336 kB' 'KReclaimable: 296508 kB' 'Slab: 932272 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 635764 kB' 'KernelStack: 21904 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11093148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216508 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.995 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:14.996 nr_hugepages=1024 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.996 resv_hugepages=0 00:03:14.996 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.996 surplus_hugepages=0 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.996 anon_hugepages=0 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:14.996 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40103180 kB' 'MemAvailable: 44777284 kB' 'Buffers: 3728 kB' 'Cached: 14251596 kB' 'SwapCached: 0 kB' 'Active: 10307516 kB' 'Inactive: 4456536 kB' 'Active(anon): 9741064 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511984 kB' 'Mapped: 228724 kB' 'Shmem: 9232336 kB' 'KReclaimable: 296508 kB' 'Slab: 932180 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 635672 kB' 'KernelStack: 22000 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11101220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.997 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:14.998 12:04:43 
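The assertions traced at setup/hugepages.sh@107 and @110 above tie the collected values together: the kernel's HugePages_Total must equal the requested pool size plus reserved and surplus pages, and here 1024 == 1024 + 0 + 0 holds. A sketch of the same consistency check, reusing the hypothetical meminfo_value helper from the earlier sketch (variable names follow the trace; this is not the SPDK code itself):

    nr_hugepages=1024                              # requested pool size
    resv=$(meminfo_value HugePages_Rsvd)           # 0 in this run
    surp=$(meminfo_value HugePages_Surp)           # 0 in this run
    total=$(meminfo_value HugePages_Total)         # 1024 in this run
    (( total == nr_hugepages + resv + surp )) || echo "hugepage accounting mismatch" >&2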
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.998 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20259544 kB' 'MemUsed: 12379596 kB' 'SwapCached: 0 kB' 'Active: 6809472 kB' 'Inactive: 3369880 kB' 'Active(anon): 6520408 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3369880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9863472 kB' 'Mapped: 145548 kB' 'AnonPages: 319000 kB' 'Shmem: 6204528 kB' 'KernelStack: 11832 kB' 'PageTables: 4964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178792 kB' 'Slab: 504036 kB' 'SReclaimable: 178792 kB' 'SUnreclaim: 325244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.999 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
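The long run of 'continue' lines in this part of the trace is setup/common.sh's get_meminfo helper scanning a meminfo snapshot one key at a time: each line is split on ': ', compared against the requested key (HugePages_Surp here), and skipped until the key matches, at which point the value is echoed and the helper returns. A minimal standalone sketch of that scan pattern, using the same IFS/read idiom the trace shows (the function name and snapshot handling here are illustrative, not the project's actual setup/common.sh):

  #!/usr/bin/env bash
  # scan_meminfo_key KEY [FILE] -- print the value of one meminfo-style key.
  # Mirrors the loop being traced above: split "Key: value kB" on ': ',
  # skip non-matching keys (the repeated 'continue' lines), echo the match.
  scan_meminfo_key() {
      local get=$1 mem_f=${2:-/proc/meminfo}
      local -a mem
      local line var val _
      mapfile -t mem < "$mem_f"              # one snapshot line per array element
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # skipped keys appear as 'continue' under xtrace
          echo "${val:-0}"
          return 0
      done
      return 1
  }

  scan_meminfo_key HugePages_Surp            # prints 0 on the system traced here

Under 'set -x' every skipped key produces exactly the [[ ... ]] / continue pair seen above, which is why a single get_meminfo call expands into dozens of trace lines.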
00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 19836452 kB' 'MemUsed: 7819616 kB' 'SwapCached: 0 kB' 'Active: 3502444 kB' 'Inactive: 1086656 kB' 'Active(anon): 3225056 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 1086656 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4391936 kB' 'Mapped: 82876 kB' 'AnonPages: 197284 kB' 'Shmem: 3027892 kB' 'KernelStack: 10168 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117716 kB' 'Slab: 428116 kB' 'SReclaimable: 117716 kB' 'SUnreclaim: 310400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
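Just above, the trace also shows how the same helper picks its data source when a node argument is given: for node 1 it swaps mem_f from /proc/meminfo to /sys/devices/system/node/node1/meminfo, loads it with mapfile, and strips the leading 'Node 1 ' prefix with the extglob expansion mem=("${mem[@]#Node +([0-9]) }") so per-node and system-wide snapshots can be scanned the same way. A hedged sketch of just that selection step (standard sysfs paths; the function name is illustrative):

  #!/usr/bin/env bash
  shopt -s extglob                           # required for the +([0-9]) pattern below
  # load_meminfo [NODE] -- fill the 'mem' array with "Key: value" lines,
  # either system-wide or for one NUMA node, normalised to the same shape.
  declare -a mem
  load_meminfo() {
      local node=${1:-}
      local mem_f=/proc/meminfo
      # Per-node snapshots live in sysfs and prefix every line with "Node N ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node 1 " style prefix, if any
  }

  load_meminfo 1                             # node 1, as in the call traced above
  printf '%s\n' "${mem[@]:0:3}"              # first few normalised lines

The long printf '%s\n' 'MemTotal: ...' line in the trace is the helper feeding this normalised per-node array into the per-key scan shown earlier.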
00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.000 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
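The per-node totals this test is about to check (the 'node0=512 expecting 512' and 'node1=512 expecting 512' lines further down in the trace) are also exposed directly as per-node hugepage counters in sysfs, which can be handier than parsing meminfo when inspecting a machine by hand. A small illustrative example using the standard kernel sysfs paths for 2048 kB hugepages (not part of the SPDK setup scripts):

  #!/usr/bin/env bash
  # Print total/free/surplus 2 MiB hugepages for every NUMA node.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      hp=$node_dir/hugepages/hugepages-2048kB
      [[ -d $hp ]] || continue
      printf '%s: total=%s free=%s surplus=%s\n' "${node_dir##*/}" \
          "$(cat "$hp/nr_hugepages")" \
          "$(cat "$hp/free_hugepages")" \
          "$(cat "$hp/surplus_hugepages")"
  done

On the machine traced here this should report total=512 free=512 surplus=0 for each node, matching the HugePages_Total/HugePages_Free/HugePages_Surp values in the per-node dump above.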
00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:15.001 node0=512 expecting 512 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:15.001 node1=512 expecting 512 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:15.001 00:03:15.001 real 0m3.355s 00:03:15.001 user 0m1.238s 00:03:15.001 sys 0m2.145s 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:15.001 12:04:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:15.001 ************************************ 00:03:15.001 END TEST per_node_1G_alloc 00:03:15.001 ************************************ 00:03:15.001 12:04:43 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:15.001 12:04:43 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:15.001 12:04:43 
setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:15.001 12:04:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:15.001 ************************************ 00:03:15.001 START TEST even_2G_alloc 00:03:15.001 ************************************ 00:03:15.001 12:04:43 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.002 12:04:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:18.293 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
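The even_2G_alloc prologue above requests a 2 GiB hugepage reservation (size 2097152 in the trace), converts it to nr_hugepages=1024 using the default 2048 kB hugepage size, splits it 512/512 across the two NUMA nodes, and then runs scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes (the 'Already using the vfio-pci driver' lines are that script reporting existing device bindings). A simplified, hedged re-creation of the sizing arithmetic; the real get_test_nr_hugepages / get_test_nr_hugepages_per_node in setup/hugepages.sh also handle user-supplied node lists and other corner cases:

  #!/usr/bin/env bash
  # Even hugepage split: requested size -> page count -> per-node quotas.
  shopt -s nullglob
  size_kb=2097152                                      # 2 GiB; the trace shows this yielding nr_hugepages=1024
  default_hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this machine
  nr_hugepages=$(( size_kb / default_hp_kb ))          # 1024

  nodes=(/sys/devices/system/node/node[0-9]*)
  no_nodes=${#nodes[@]}                                # 2 in this run
  (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }

  declare -a nodes_test
  for (( i = 0; i < no_nodes; i++ )); do
      nodes_test[i]=$(( nr_hugepages / no_nodes ))     # 512 per node
  done

  echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes"
  for (( i = 0; i < no_nodes; i++ )); do
      echo "node$i=${nodes_test[i]} expecting ${nodes_test[i]}"
  done

After setup.sh finishes, verify_nr_hugepages re-reads AnonHugePages and HugePages_Surp through the same get_meminfo helper, which is what the remainder of this trace shows.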
00:03:18.293 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:18.293 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40109560 kB' 'MemAvailable: 44783664 kB' 'Buffers: 3728 kB' 'Cached: 14251756 kB' 'SwapCached: 0 kB' 'Active: 10308792 kB' 'Inactive: 4456536 kB' 'Active(anon): 9742340 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512680 kB' 'Mapped: 228916 kB' 'Shmem: 9232496 kB' 'KReclaimable: 296508 kB' 'Slab: 932884 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636376 kB' 'KernelStack: 22000 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11090224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.293 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.294 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:18.295 12:04:46 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40103888 kB' 'MemAvailable: 44777992 kB' 'Buffers: 3728 kB' 'Cached: 14251760 kB' 'SwapCached: 0 kB' 'Active: 10312476 kB' 'Inactive: 4456536 kB' 'Active(anon): 9746024 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516860 kB' 'Mapped: 229224 kB' 'Shmem: 9232500 kB' 'KReclaimable: 296508 kB' 'Slab: 932872 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636364 kB' 'KernelStack: 22032 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11093820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216476 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 
12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.295 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 
12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 
12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.296 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40107904 kB' 'MemAvailable: 44782008 kB' 'Buffers: 3728 kB' 'Cached: 14251776 kB' 'SwapCached: 0 kB' 'Active: 10308208 kB' 'Inactive: 4456536 kB' 'Active(anon): 9741756 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512628 kB' 'Mapped: 228720 kB' 'Shmem: 9232516 kB' 'KReclaimable: 296508 kB' 'Slab: 932872 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636364 kB' 'KernelStack: 22032 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11089500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216424 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.297 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.298 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:18.299 nr_hugepages=1024 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.299 resv_hugepages=0 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.299 surplus_hugepages=0 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.299 anon_hugepages=0 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.299 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.561 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 
12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40101448 kB' 'MemAvailable: 44775552 kB' 'Buffers: 3728 kB' 'Cached: 14251816 kB' 'SwapCached: 0 kB' 'Active: 10311848 kB' 'Inactive: 4456536 kB' 'Active(anon): 9745396 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516088 kB' 'Mapped: 229224 kB' 'Shmem: 9232556 kB' 'KReclaimable: 296508 kB' 'Slab: 932864 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636356 kB' 'KernelStack: 22000 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11093500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216440 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.562 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
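Immediately below, once the total count of 1024 is confirmed, the same lookup switches to its per-node form: get_meminfo HugePages_Surp 0 reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo and strips the leading "Node 0 " prefix from every line before comparing keys. A sketch of that variant, assuming a hypothetical helper name (get_node_meminfo) while mirroring the mapfile and prefix-strip steps shown in the trace:

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern used below

    # Hypothetical per-node variant of the lookup traced below; the real
    # helper lives in setup/common.sh and takes the key plus a node number.
    get_node_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo var val _ line
        local -a mem
        # Prefer the per-node meminfo file when the node exposes one.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_node_meminfo HugePages_Surp 0   # prints 0 in the run traced here

With no surplus pages on either node, the per-node HugePages_Free values (512 and 512 in the printf snapshots below) are what the even_2G_alloc check compares against its expectation of 512 per node.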
00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20267652 kB' 'MemUsed: 12371488 kB' 'SwapCached: 0 kB' 'Active: 6803744 kB' 'Inactive: 3369880 kB' 'Active(anon): 6514680 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3369880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9863484 kB' 'Mapped: 145564 kB' 'AnonPages: 313368 kB' 'Shmem: 6204540 kB' 'KernelStack: 11864 kB' 'PageTables: 5016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178792 kB' 'Slab: 504500 kB' 'SReclaimable: 178792 kB' 'SUnreclaim: 325708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 
kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.563 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.564 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 19844672 kB' 'MemUsed: 7811396 kB' 'SwapCached: 0 kB' 'Active: 3502852 kB' 'Inactive: 1086656 kB' 'Active(anon): 3225464 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 1086656 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4392084 kB' 'Mapped: 82744 kB' 'AnonPages: 197488 kB' 'Shmem: 3028040 kB' 'KernelStack: 10136 kB' 'PageTables: 3748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 117716 kB' 'Slab: 428232 kB' 'SReclaimable: 117716 kB' 'SUnreclaim: 310516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.565 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:18.566 node0=512 expecting 512 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:18.566 node1=512 expecting 512 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:18.566 00:03:18.566 real 0m3.410s 00:03:18.566 user 0m1.300s 00:03:18.566 sys 0m2.121s 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:18.566 12:04:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:18.566 ************************************ 00:03:18.566 END TEST even_2G_alloc 00:03:18.566 ************************************ 00:03:18.566 12:04:46 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:18.566 12:04:46 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:18.566 12:04:46 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:18.566 12:04:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.566 ************************************ 00:03:18.566 START TEST odd_alloc 00:03:18.566 ************************************ 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- 
common/autotest_common.sh@1122 -- # odd_alloc 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:18.566 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:18.567 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:18.567 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.567 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:18.567 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:18.567 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:18.567 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.567 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:18.567 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:18.567 12:04:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:18.567 12:04:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.567 12:04:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.863 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:80:04.6 (8086 2021): 
Already using the vfio-pci driver 00:03:21.863 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.863 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40103004 kB' 'MemAvailable: 44777108 kB' 'Buffers: 3728 kB' 'Cached: 14251920 kB' 'SwapCached: 0 kB' 'Active: 10310748 kB' 'Inactive: 4456536 kB' 'Active(anon): 9744296 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515068 kB' 'Mapped: 228424 kB' 'Shmem: 9232660 kB' 'KReclaimable: 296508 kB' 'Slab: 933336 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636828 kB' 'KernelStack: 22016 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11088356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.863 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.864 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[... xtrace of the per-key IFS=': ' / read -r var val _ / compare / continue cycle elided; it repeats for each remaining /proc/meminfo key until the requested field matches ...]
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
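What the trace above is walking through is setup/common.sh's get_meminfo helper: it reads /proc/meminfo one 'key: value' line at a time, skipping every field until the requested one (AnonHugePages here) matches, then echoes the value so hugepages.sh can record anon=0. A minimal standalone sketch of that scan, assuming a Linux host and leaving out the per-NUMA-node handling visible later in the trace (illustrative only, not the project's script):

get_meminfo_sketch() {
    # Walk /proc/meminfo, split each line on ': ', and print the value
    # (without its unit) of the first field whose name matches $1.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch AnonHugePages   # prints the AnonHugePages value; 0 kB in the run above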
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.865 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40103112 kB' 'MemAvailable: 44777216 kB' 'Buffers: 3728 kB' 'Cached: 14251924 kB' 'SwapCached: 0 kB' 'Active: 10310460 kB' 'Inactive: 4456536 kB' 'Active(anon): 9744008 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515320 kB' 'Mapped: 228320 kB' 'Shmem: 9232664 kB' 'KReclaimable: 296508 kB' 'Slab: 933296 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636788 kB' 'KernelStack: 22016 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11088004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB'
[... xtrace of the per-key compare/continue loop over this snapshot elided; it runs until HugePages_Surp matches ...]
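One point worth noting about the snapshot above: with only 2048 kB pages configured, the reported 'Hugetlb: 2099200 kB' is simply HugePages_Total times Hugepagesize, so the 1025 odd-allocated pages account for all hugetlb memory on this host. A quick check, not part of the test itself:

echo $(( 1025 * 2048 ))   # 2099200, matching the 'Hugetlb' field (in kB) in the snapshot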
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.867 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40103364 kB' 'MemAvailable: 44777468 kB' 'Buffers: 3728 kB' 'Cached: 14251940 kB' 'SwapCached: 0 kB' 'Active: 10312508 kB' 'Inactive: 4456536 kB' 'Active(anon): 9746056 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517356 kB' 'Mapped: 228824 kB' 'Shmem: 9232680 kB' 'KReclaimable: 296508 kB' 'Slab: 933296 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636788 kB' 'KernelStack: 22016 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11091204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB'
[... xtrace of the per-key compare/continue loop elided; it runs until HugePages_Rsvd matches ...]
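The 'local node=' and '[[ -e /sys/devices/system/node/node/meminfo ]]' steps repeated in each call suggest get_meminfo can also read a per-NUMA-node meminfo file when a node id is supplied; with node empty that test fails and the helper falls back to /proc/meminfo, while the mem=("${mem[@]#Node +([0-9]) }") line strips the 'Node <id> ' prefix those per-node files carry. A small illustrative snippet of that prefix strip (assumes bash with extglob; not the script's own code):

shopt -s extglob
line='Node 0 HugePages_Surp:     0'    # per-node meminfo line format
echo "${line#Node +([0-9]) }"          # -> HugePages_Surp:     0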
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:21.869 nr_hugepages=1025
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:21.869 resv_hugepages=0
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:21.869 surplus_hugepages=0
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:21.869 anon_hugepages=0
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.869 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40099872 kB' 'MemAvailable: 44773976 kB' 'Buffers: 3728 kB' 'Cached: 14251940 kB' 'SwapCached: 0 kB' 'Active: 10315808 kB' 'Inactive: 4456536 kB' 'Active(anon): 9749356 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520624 kB' 'Mapped: 228824 kB' 'Shmem: 9232680 kB' 'KReclaimable: 296508 kB' 'Slab: 933232 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636724 kB' 'KernelStack: 21984 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11094536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216476 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB'
[... xtrace of the per-key compare/continue loop over this snapshot continues until HugePages_Total matches ...]
12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.870 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 
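The xtrace above is setup/common.sh walking a meminfo file one 'key: value' pair at a time and echoing the value once the requested key (here HugePages_Total) matches; every other key just triggers 'continue'. A minimal bash sketch of that loop, reconstructed from the trace (the helper name and argument handling here are assumptions, not the verbatim SPDK script):

shopt -s extglob
# Scan a meminfo file for a single key and print its value (sketch of the traced loop).
get_meminfo_sketch() {
    local get=$1 node=${2:-}                  # e.g. HugePages_Total, optional NUMA node
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix each line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # the long run of 'continue' entries in the trace
        echo "$val"                           # e.g. 1025 total hugepages in this run
        return 0
    done
    return 1
}

Called as get_meminfo_sketch HugePages_Surp 0, it would read /sys/devices/system/node/node0/meminfo and print the surplus count, which is 0 in this run.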
00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20256112 kB' 'MemUsed: 12383028 kB' 'SwapCached: 0 kB' 'Active: 6806844 kB' 'Inactive: 3369880 kB' 'Active(anon): 6517780 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3369880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9863484 kB' 'Mapped: 145752 kB' 'AnonPages: 317008 kB' 'Shmem: 6204540 kB' 'KernelStack: 11864 kB' 'PageTables: 5036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178792 kB' 'Slab: 504904 kB' 'SReclaimable: 178792 kB' 'SUnreclaim: 326112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
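Stated plainly, the hugepages.sh check traced here confirms the kernel-wide count and then folds per-node reserved/surplus pages into the expected split before comparing. A rough restatement with the numbers from this run (variable names below are illustrative; in the trace the arrays are nodes_sys and nodes_test):

nr_hugepages=1025 surp=0 resv=0               # totals echoed by the meminfo scan above
(( 1025 == nr_hugepages + surp + resv )) && echo 'global HugePages_Total matches'
expected=([0]=512 [1]=513)                    # per-node counts picked up by get_nodes
for node in "${!expected[@]}"; do
    (( expected[node] += resv + 0 ))          # per-node HugePages_Surp is 0 on both nodes here
done
echo "node0=${expected[0]} node1=${expected[1]}"   # the 512/513 split checked further down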
00:03:21.871 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue (per-key scan of the node0 meminfo dump above, MemTotal through HugePages_Free, elided; only HugePages_Surp is of interest here) 00:03:21.872 12:04:50
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 19847092 kB' 'MemUsed: 7808976 kB' 'SwapCached: 0 kB' 'Active: 3504320 kB' 'Inactive: 1086656 kB' 'Active(anon): 3226932 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 1086656 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4392244 kB' 'Mapped: 83248 kB' 'AnonPages: 198948 kB' 'Shmem: 3028200 kB' 'KernelStack: 10184 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117716 kB' 'Slab: 428320 kB' 'SReclaimable: 117716 kB' 'SUnreclaim: 310604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.872 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.872 12:04:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] (per-key scan of the node1 meminfo dump, MemUsed through AnonHugePages, elided; the loop continues past every key until HugePages_Surp) 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
-- # read -r var val _ 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:21.873 node0=512 expecting 513 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.873 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:21.873 node1=513 expecting 512 00:03:21.874 12:04:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:21.874 00:03:21.874 real 0m3.267s 00:03:21.874 user 0m1.150s 00:03:21.874 sys 0m2.119s 00:03:21.874 12:04:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:21.874 12:04:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:21.874 ************************************ 00:03:21.874 END TEST odd_alloc 00:03:21.874 ************************************ 00:03:21.874 12:04:50 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:21.874 12:04:50 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:21.874 12:04:50 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:21.874 12:04:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:21.874 ************************************ 00:03:21.874 START TEST custom_alloc 00:03:21.874 ************************************ 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( 
_nr_hugepages += nodes_hp[node] )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.874 12:04:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.163 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:25.163 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:25.163 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:25.163 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:25.163 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:25.163 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:25.163 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:25.163 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:25.163 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:25.163 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:25.163 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:25.163 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:25.164 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:25.164 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:25.164 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:25.164 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:25.164 0000:d8:00.0 
(8086 0a54): Already using the vfio-pci driver 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39053932 kB' 'MemAvailable: 43728036 kB' 'Buffers: 3728 kB' 'Cached: 14252080 kB' 'SwapCached: 0 kB' 'Active: 10315184 kB' 'Inactive: 4456536 kB' 'Active(anon): 9748732 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518828 kB' 'Mapped: 229208 kB' 'Shmem: 9232820 kB' 'KReclaimable: 296508 kB' 'Slab: 932788 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636280 kB' 'KernelStack: 22096 kB' 'PageTables: 9504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11113084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216668 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 
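For reference, the custom_alloc trace above assigns 512 hugepages to node 0 and 1024 to node 1, folds them into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' with a 1536-page total, and then verify_nr_hugepages walks /proc/meminfo (or a node's meminfo file, with its "Node <n>" prefix stripped) key by key to read values such as AnonHugePages, HugePages_Surp and HugePages_Rsvd. A minimal bash sketch of those two steps follows; build_hugenode and meminfo_value are hypothetical stand-ins, not the real helpers in setup/hugepages.sh or setup/common.sh.

#!/usr/bin/env bash
# Sketch only -- condensed from the behaviour visible in the trace, not the
# SPDK scripts themselves; build_hugenode and meminfo_value are made-up names.
set -euo pipefail

# Fold per-node page counts (here 512 on node 0, 1024 on node 1) into the
# HUGENODE string and a running total, as hugepages.sh@181-187 does above.
build_hugenode() {
    local -a nodes_hp=("$@")
    local -a parts=()
    local total=0 node
    for node in "${!nodes_hp[@]}"; do
        parts+=("nodes_hp[$node]=${nodes_hp[node]}")
        total=$(( total + nodes_hp[node] ))
    done
    local IFS=,                       # comma-join the per-node assignments
    echo "HUGENODE='${parts[*]}' nr_hugepages=$total"
}

# Read one key from /proc/meminfo, or from a per-node meminfo file when a
# node number is given; per-node lines carry a "Node <n> " prefix that the
# real get_meminfo strips with "${mem[@]#Node +([0-9]) }".
meminfo_value() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    if [[ -n $node ]]; then
        file=/sys/devices/system/node/node${node}/meminfo
    fi
    sed 's/^Node [0-9]* //' "$file" | awk -v k="${key}:" '$1 == k { print $2 }'
}

build_hugenode 512 1024        # -> HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' nr_hugepages=1536
meminfo_value HugePages_Total  # 1536 once the allocation above has taken effect
meminfo_value AnonHugePages    # 0 in the dump above

The meminfo dump above is self-consistent with that request: HugePages_Total: 1536 at Hugepagesize: 2048 kB gives Hugetlb: 3145728 kB (1536 x 2048 kB), i.e. the 512 + 1024 pages spread across the two nodes.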
00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.164 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.165 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39057660 kB' 'MemAvailable: 43731764 kB' 'Buffers: 3728 kB' 'Cached: 14252084 kB' 'SwapCached: 0 kB' 'Active: 10308640 kB' 'Inactive: 4456536 kB' 'Active(anon): 9742188 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512328 kB' 'Mapped: 228408 kB' 'Shmem: 9232824 kB' 'KReclaimable: 296508 kB' 'Slab: 932788 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636280 kB' 'KernelStack: 22208 kB' 'PageTables: 9504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11091224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216648 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.430 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 
12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.431 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.432 12:04:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39056796 kB' 'MemAvailable: 43730900 kB' 'Buffers: 3728 kB' 'Cached: 14252100 kB' 'SwapCached: 0 kB' 'Active: 10307584 kB' 'Inactive: 4456536 kB' 'Active(anon): 9741132 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511624 kB' 'Mapped: 228328 kB' 'Shmem: 9232840 kB' 'KReclaimable: 296508 kB' 'Slab: 932796 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636288 kB' 'KernelStack: 22176 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 
'Committed_AS: 11091380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 
12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.432 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 
12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.433 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@33 -- # return 0 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:25.434 nr_hugepages=1536 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.434 resv_hugepages=0 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.434 surplus_hugepages=0 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.434 anon_hugepages=0 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39056380 kB' 'MemAvailable: 43730484 kB' 'Buffers: 3728 kB' 'Cached: 14252120 kB' 'SwapCached: 0 kB' 'Active: 10307576 kB' 'Inactive: 4456536 kB' 'Active(anon): 9741124 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511632 kB' 'Mapped: 228328 kB' 'Shmem: 9232860 kB' 'KReclaimable: 296508 kB' 'Slab: 932796 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636288 kB' 'KernelStack: 22032 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11091404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 
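The repeated `[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` / `continue` runs above are setup/common.sh's get_meminfo helper scanning a meminfo snapshot one `key: value` pair at a time and skipping every key that is not the one being queried; the scan that just finished returned HugePages_Rsvd=0 (resv=0), and the next one walks the global /proc/meminfo snapshot printed above to pull out HugePages_Total=1536. A minimal sketch of that lookup pattern, reconstructed from what the trace shows rather than copied from setup/common.sh, would look like:

#!/usr/bin/env bash
# Sketch of the get_meminfo lookup pattern visible in the trace
# (illustrative reconstruction, not the actual setup/common.sh code).
# Prints one field from /proc/meminfo, or from a per-node meminfo file
# when a node number is given.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val rest
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}            # per-node lines carry a "Node <N> " prefix
        IFS=': ' read -r var val rest <<< "$line"
        [[ $var == "$get" ]] || continue      # skip non-matching keys, as in the scan above
        echo "$val"                           # value only; units such as kB end up in $rest
        return 0
    done < "$mem_f"
    return 1
}

# On this runner the trace shows:
#   get_meminfo_sketch HugePages_Rsvd     -> 0
#   get_meminfo_sketch HugePages_Total    -> 1536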
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.434 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.435 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.436 12:04:53 
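At this point the test has confirmed that the global pool really holds 1536 pages (HugePages_Total equals nr_hugepages + surp + resv, with both surp and resv at 0), and get_nodes has recorded the intended split across the two NUMA nodes: nodes_sys[0]=512, nodes_sys[1]=1024, no_nodes=2. What follows is the same get_meminfo walk repeated against each node's own meminfo file to pick up HugePages_Surp. A short stand-alone check along the same lines, an assumed reconstruction using the standard sysfs paths for 2048 kB pages rather than the test's own helpers, could be:

#!/usr/bin/env bash
# Sketch of the per-node hugepage accounting performed above
# (assumed reconstruction from the trace, not the hugepages.sh source).
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

sum=0
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}
    pages=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node$n=$pages"
    (( sum += pages ))
done

# The trace reports node0=512 and node1=1024, which matches the global 1536.
if (( sum == total )); then
    echo "per-node split adds up to $total"
else
    echo "mismatch: $sum != $total" >&2
fi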
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20249176 kB' 'MemUsed: 12389964 kB' 'SwapCached: 0 kB' 'Active: 6804156 kB' 'Inactive: 3369880 kB' 'Active(anon): 6515092 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3369880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9863576 kB' 'Mapped: 145584 kB' 'AnonPages: 313728 kB' 'Shmem: 6204632 kB' 'KernelStack: 11864 kB' 'PageTables: 4948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178792 kB' 'Slab: 504428 kB' 'SReclaimable: 178792 kB' 'SUnreclaim: 325636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.436 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.436 12:04:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 18806604 kB' 'MemUsed: 8849464 kB' 'SwapCached: 0 kB' 'Active: 3503672 kB' 'Inactive: 1086656 kB' 'Active(anon): 3226284 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 1086656 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4392308 kB' 'Mapped: 82744 kB' 'AnonPages: 198196 kB' 'Shmem: 3028264 kB' 'KernelStack: 10312 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117716 kB' 'Slab: 428368 kB' 'SReclaimable: 117716 kB' 'SUnreclaim: 310652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.437 12:04:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31-32 -- # [xtrace condensed: the IFS=': ' read loop walks the remaining /proc/meminfo fields (Active, Inactive, Active(anon), Inactive(anon), ..., AnonHugePages, ShmemHugePages) and hits 'continue' on each one until the HugePages_Surp entry is reached] 00:03:25.437-00:03:25.438 12:04:53
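For reference, the field lookup being traced here boils down to roughly the following. This is a minimal sketch reconstructed from the xtrace above (illustrative names, not the verbatim setup/common.sh source): read /proc/meminfo, or the per-node copy when a node id is given, strip the 'Node <N> ' prefix, then echo the value of the first field whose name matches the requested key.

    # sketch only - reconstructed from the trace, not the real setup/common.sh helper
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        shopt -s extglob                      # needed for the +([0-9]) pattern below
        mem=("${mem[@]#Node +([0-9]) }")      # drop the per-node 'Node N ' prefix
        while IFS=': ' read -r var val _; do  # e.g. var=HugePages_Surp, val=0
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Surp 0   -> prints 0 on this run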
setup/common.sh@31 -- # read -r var val _ 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:25.438 node0=512 expecting 512 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 
00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:25.438 node1=1024 expecting 1024 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:25.438 00:03:25.438 real 0m3.503s 00:03:25.438 user 0m1.292s 00:03:25.438 sys 0m2.216s 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:25.438 12:04:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:25.438 ************************************ 00:03:25.438 END TEST custom_alloc 00:03:25.438 ************************************ 00:03:25.438 12:04:53 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:25.439 12:04:53 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:25.439 12:04:53 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:25.439 12:04:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:25.439 ************************************ 00:03:25.439 START TEST no_shrink_alloc 00:03:25.439 ************************************ 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:25.439 12:04:53 
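Just above, custom_alloc wraps up by checking that the per-node counts it observed (node0=512, node1=1024) match the expected '512,1024' split, and the suite moves on to no_shrink_alloc. That test asks for a 2097152 kB (2 GiB) pool pinned to node 0 and ends up with nr_hugepages=1024; the division itself is not shown in the xtrace, but with the 2048 kB Hugepagesize reported in the meminfo snapshots below it works out as sketched here (illustrative shell, not the exact setup/hugepages.sh code):

    # sketch only - how the 2 GiB request maps onto 2048 kB hugepages
    size_kb=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this node
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 2097152 / 2048 = 1024
    declare -a nodes_test=()
    for node in 0; do                        # node_ids=('0') in the trace above
        nodes_test[node]=$nr_hugepages       # all 1024 pages requested on node 0
    done
    echo "node0=${nodes_test[0]}"            # node0=1024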
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.439 12:04:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.738 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:28.738 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40110636 kB' 'MemAvailable: 44784740 kB' 'Buffers: 3728 kB' 'Cached: 14252244 kB' 'SwapCached: 0 kB' 'Active: 10307680 kB' 'Inactive: 4456536 kB' 'Active(anon): 9741228 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511064 kB' 'Mapped: 228456 kB' 'Shmem: 9232984 kB' 'KReclaimable: 296508 kB' 'Slab: 932908 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636400 kB' 'KernelStack: 21984 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11089608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.738 12:04:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31-32 -- # [xtrace condensed: the same read/continue loop scans the remaining /proc/meminfo fields (SwapCached, Active, Inactive, ..., VmallocChunk, Percpu) looking for AnonHugePages] 00:03:28.738-00:03:28.739 12:04:57
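The full meminfo snapshot captured a few lines above is consistent with the pool that was just requested: HugePages_Total is 1024, Hugepagesize is 2048 kB, and the Hugetlb line is simply their product (standard /proc/meminfo accounting, not something the script prints itself):

    # sanity check on the snapshot: Hugetlb = HugePages_Total * Hugepagesize
    echo $(( 1024 * 2048 )) kB    # 2097152 kB, matching both the Hugetlb line and the requested size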
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.739 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40110888 kB' 'MemAvailable: 44784992 kB' 'Buffers: 3728 kB' 'Cached: 14252248 kB' 'SwapCached: 0 kB' 'Active: 10307428 kB' 'Inactive: 4456536 kB' 'Active(anon): 9740976 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511284 kB' 'Mapped: 228344 kB' 'Shmem: 9232988 kB' 'KReclaimable: 296508 kB' 'Slab: 932884 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636376 kB' 'KernelStack: 21968 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11089756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 
kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.740 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [xtrace condensed: the read/continue loop scans the remaining /proc/meminfo fields (Inactive(anon), Active(file), ..., HugePages_Total, HugePages_Free, HugePages_Rsvd) until the HugePages_Surp entry is reached] 00:03:28.740-00:03:28.741 12:04:57
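The lookups traced in this stretch are the data-gathering half of the verification: the hugepages.sh trace above first confirms transparent hugepages are not forced off ('always [madvise] never' does not contain '[never]') and reads AnonHugePages, then collects HugePages_Surp (resolved to 0 just below) and HugePages_Rsvd. Roughly, and reusing the get_meminfo_sketch helper sketched earlier (again illustrative, not the verbatim setup/hugepages.sh source):

    # sketch only - the anon/surp/resv gathering step of the verification
    anon=0
    if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)    # 0 in this run
    fi
    surp=$(get_meminfo_sketch HugePages_Surp)       # surplus pages beyond nr_hugepages; 0 here
    resv=$(get_meminfo_sketch HugePages_Rsvd)       # reserved but not yet faulted pages
    echo "anon=$anon surp=$surp resv=$resv"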
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.741 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40111284 kB' 'MemAvailable: 44785388 kB' 'Buffers: 3728 kB' 'Cached: 14252268 kB' 'SwapCached: 0 kB' 'Active: 10307488 kB' 'Inactive: 4456536 kB' 'Active(anon): 9741036 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511288 kB' 'Mapped: 228344 kB' 'Shmem: 9233008 kB' 'KReclaimable: 296508 kB' 'Slab: 932884 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636376 kB' 'KernelStack: 21968 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11089784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 
12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 
12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.742 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.743 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
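The trace above and below is setup/common.sh's get_meminfo helper scanning a meminfo file one field at a time: it sets IFS=': ', reads each row into var/val, and hits `continue` for every key until var matches the requested field (here HugePages_Rsvd), at which point it echoes the value and returns. The following is a minimal, hypothetical sketch of that scan logic under the assumption of a standard Linux meminfo layout; it is not the actual setup/common.sh implementation.

  # Hypothetical, simplified re-implementation of the key scan shown in this trace.
  # get_meminfo_sketch FIELD [NODE] prints the value of FIELD from /proc/meminfo,
  # or from /sys/devices/system/node/nodeN/meminfo when a node number is given.
  get_meminfo_sketch() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node${node}/meminfo
      while read -r line; do
          line=${line#Node "$node" }          # per-node files prefix each row with "Node N "
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then       # e.g. HugePages_Rsvd
              echo "${val:-0}"                # value in pages (kB for size fields)
              return 0
          fi
      done < "$mem_f"
      echo 0                                  # field not present
  }
  # Example: get_meminfo_sketch HugePages_Rsvd     -> 0 on this host
  #          get_meminfo_sketch HugePages_Surp 0   -> surplus count for node0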
00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.047 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.048 nr_hugepages=1024 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.048 resv_hugepages=0 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.048 surplus_hugepages=0 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.048 anon_hugepages=0 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40115628 kB' 'MemAvailable: 44789732 kB' 'Buffers: 3728 kB' 'Cached: 14252284 kB' 'SwapCached: 0 kB' 'Active: 10308080 kB' 'Inactive: 4456536 kB' 'Active(anon): 9741628 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511932 kB' 'Mapped: 228344 kB' 'Shmem: 9233024 kB' 'KReclaimable: 296508 kB' 'Slab: 932884 kB' 'SReclaimable: 296508 kB' 'SUnreclaim: 636376 kB' 'KernelStack: 22032 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11090160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 
12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.048 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
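The same scan continues just below for HugePages_Total and ends by echoing 1024; setup/hugepages.sh then asserts (( 1024 == nr_hugepages + surp + resv )) before walking /sys/devices/system/node/node* to check the per-node split (node0=1024 expecting 1024 later in the trace). Below is a minimal, hypothetical sketch of that accounting check, reading /proc/meminfo directly with awk instead of the script's get_meminfo helper; it is not the real setup/hugepages.sh code.

  # Hypothetical sketch of the pool check traced here.
  # Passes only when HugePages_Total == requested + HugePages_Surp + HugePages_Rsvd.
  verify_hugepages_sketch() {
      local requested=$1 total surp resv
      total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
      surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
      resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
      if (( total == requested + surp + resv )); then
          echo "nr_hugepages=$requested surplus_hugepages=$surp resv_hugepages=$resv"
      else
          echo "hugepage pool mismatch: total=$total requested=$requested" >&2
          return 1
      fi
  }
  # Example: verify_hugepages_sketch 1024
  #   -> "nr_hugepages=1024 surplus_hugepages=0 resv_hugepages=0" on this host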
00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.049 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=0 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19217300 kB' 'MemUsed: 13421840 kB' 'SwapCached: 0 kB' 'Active: 6804304 kB' 'Inactive: 3369880 kB' 'Active(anon): 6515240 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3369880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9863668 kB' 'Mapped: 145600 kB' 'AnonPages: 313784 kB' 'Shmem: 6204724 kB' 'KernelStack: 11896 kB' 'PageTables: 4944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178792 kB' 'Slab: 504516 kB' 'SReclaimable: 178792 kB' 'SUnreclaim: 325724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.050 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': '
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:29.051 node0=1024 expecting 1024
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:29.051 12:04:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:32.353 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:32.353 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:32.353 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local
surp 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40122932 kB' 'MemAvailable: 44797020 kB' 'Buffers: 3728 kB' 'Cached: 14252384 kB' 'SwapCached: 0 kB' 'Active: 10311972 kB' 'Inactive: 4456536 kB' 'Active(anon): 9745520 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516036 kB' 'Mapped: 228360 kB' 'Shmem: 9233124 kB' 'KReclaimable: 296476 kB' 'Slab: 933284 kB' 'SReclaimable: 296476 kB' 'SUnreclaim: 636808 kB' 'KernelStack: 22000 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11090088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.353 12:05:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.353 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
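The entries running through here are setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until it reaches AnonHugePages (the hugepages.sh@97 call inside verify_nr_hugepages); every non-matching key hits the continue at common.sh@32. Below is a minimal bash sketch of that lookup, reconstructed only from the commands visible in the xtrace (the local declarations at common.sh@17-@20, mem_f=/proc/meminfo, the mapfile at @28, the Node-prefix strip at @29, and the IFS=': ' read loop at @31-@33); the function signature and the for-loop framing are assumptions, not the script itself.

#!/usr/bin/env bash
# Hedged sketch of the lookup traced here (setup/common.sh, get_meminfo).
# Reconstructed from the xtrace only; treat it as an approximation.
shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo
    # Per-node lookups (common.sh@23) read the node-specific file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip it (common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        # common.sh@31: split "Key:   value kB" into key and value.
        IFS=': ' read -r var val _ <<< "$line"
        # common.sh@32: skip every key until the requested one.
        [[ $var == "$get" ]] || continue
        # common.sh@33: print the value and stop.
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo AnonHugePages   # 0 kB in this run's snapshot
get_meminfo HugePages_Total # 1024 on this host

Against the meminfo snapshot printed just above, the AnonHugePages lookup returns 0, which is exactly the anon=0 the trace records once this scan finishes.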
00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.354 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40123372 kB' 'MemAvailable: 44797460 kB' 'Buffers: 3728 kB' 'Cached: 14252400 kB' 'SwapCached: 0 kB' 'Active: 10312116 kB' 'Inactive: 4456536 kB' 'Active(anon): 9745664 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516184 kB' 'Mapped: 228348 kB' 'Shmem: 9233140 kB' 'KReclaimable: 296476 kB' 'Slab: 933248 kB' 'SReclaimable: 296476 kB' 'SUnreclaim: 636772 kB' 'KernelStack: 22016 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11090604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
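By this point the @97 AnonHugePages lookup has already come back as 0; the scan running through these entries is the @99 HugePages_Surp pass, and the @100 HugePages_Rsvd pass follows further down. The sketch below shows the bookkeeping those three reads feed, using only the hugepages.sh commands visible in the trace (the (( nodes_test[node] += 0 )) step at @117, the sorted_t/sorted_s buckets at @127, the node0=1024 expecting 1024 echo at @128 and the final comparison at @130); the seeded values are this run's numbers and the surrounding control flow is an assumption.

#!/usr/bin/env bash
# Hedged sketch of the per-node bookkeeping in setup/hugepages.sh
# (verify_nr_hugepages). Array names come from the trace; flow is assumed.
anon=0        # get_meminfo AnonHugePages   (hugepages.sh@97, finished above)
surp=0        # get_meminfo HugePages_Surp  (hugepages.sh@99, the scan here)
resv=0        # get_meminfo HugePages_Rsvd  (hugepages.sh@100, still to come)

expected=1024             # the "expecting 1024" value printed earlier
nodes_test=([0]=1024)     # per-node totals; only node0 on this system
nodes_sys=([0]=0)
declare -a sorted_t sorted_s

for node in "${!nodes_test[@]}"; do
    # hugepages.sh@117: fold the counter the per-node scan returned (0 here).
    ((nodes_test[node] += surp))
    # hugepages.sh@127: bucket the totals so duplicate counts collapse.
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
    # hugepages.sh@128: the line that appears verbatim in the log.
    echo "node$node=${nodes_test[node]} expecting $expected"
done

# hugepages.sh@130: the stage passes only when the counts agree.
[[ ${nodes_test[0]} == "$expected" ]] && echo 'hugepage count verified'

Running it prints the same node0=1024 expecting 1024 line the log shows, followed by the verification message.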
00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.355 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.356 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40123968 kB' 'MemAvailable: 44798056 kB' 'Buffers: 3728 kB' 'Cached: 14252416 kB' 'SwapCached: 0 kB' 'Active: 10311896 kB' 'Inactive: 4456536 kB' 'Active(anon): 9745444 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515924 kB' 'Mapped: 228348 kB' 'Shmem: 9233156 kB' 'KReclaimable: 296476 kB' 'Slab: 933248 kB' 'SReclaimable: 296476 kB' 'SUnreclaim: 636772 kB' 'KernelStack: 22000 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11090624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.357 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
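The wall of "[[ <field> == HugePages_Rsvd ]] ... continue" entries above is setup/common.sh's get_meminfo helper walking the captured meminfo output one field at a time until it reaches the requested key, skipping everything else. A minimal standalone sketch of that scan pattern, reading /proc/meminfo directly instead of the captured array the script uses (get_mem_field is a hypothetical name, not the SPDK helper itself):

    # Scan meminfo for one key; every non-matching field costs one "continue",
    # which is why the trace repeats the same three commands per field.
    get_mem_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"            # bare count for HugePages_* keys, kB figure for sized keys
            return 0
        done </proc/meminfo
        return 1
    }
    get_mem_field HugePages_Rsvd   # prints the reserved-hugepage count, 0 in this run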
00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
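The backslash-riddled right-hand sides (\H\u\g\e\P\a\g\e\s\_\R\s\v\d) in these comparisons are not corruption: inside [[ ]] an unquoted right-hand side of == is treated as a glob pattern, and bash's xtrace escapes every character of the expanded pattern when it echoes the command. A short reproduction, independent of the SPDK scripts:

    get=HugePages_Rsvd
    set -x
    # xtrace prints:  [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    [[ HugePages_Total == $get ]] || echo 'no match, the scan continues'
    set +x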
00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:32.359 nr_hugepages=1024 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.359 resv_hugepages=0 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.359 surplus_hugepages=0 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.359 anon_hugepages=0 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60295208 kB' 'MemFree: 40124100 kB' 'MemAvailable: 44798188 kB' 'Buffers: 3728 kB' 'Cached: 14252440 kB' 'SwapCached: 0 kB' 'Active: 10312040 kB' 'Inactive: 4456536 kB' 'Active(anon): 9745588 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4456536 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516040 kB' 'Mapped: 228348 kB' 'Shmem: 9233180 kB' 'KReclaimable: 296476 kB' 'Slab: 933248 kB' 'SReclaimable: 296476 kB' 'SUnreclaim: 636772 kB' 'KernelStack: 22000 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11090648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 100800 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3206516 kB' 'DirectMap2M: 18499584 kB' 'DirectMap1G: 47185920 kB' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
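Just before each of these scans, the trace shows get_meminfo picking its input: with an empty node argument the node/node/meminfo existence test fails and it falls back to /proc/meminfo, while per-node calls (node=0 later in this log) read /sys/devices/system/node/node0/meminfo and strip the "Node 0 " prefix from every line before parsing. A paraphrased sketch of that selection, assuming extglob is enabled as it is in the SPDK setup scripts:

    shopt -s extglob
    node=${1:-}                        # empty => system-wide totals
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node <N> "
    printf '%s\n' "${mem[@]}" | head -n 3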
00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.359 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
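A few entries further on the scan reaches HugePages_Total, echoes 1024 and returns 0; hugepages.sh brackets that read with accounting checks ((( 1024 == nr_hugepages + surp + resv ))) and then walks the per-node sysfs counters. A standalone approximation of those checks using awk instead of the script's own helper; the 1024-page pool and 2048 kB page size are taken from this run and will differ on other systems:

    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'
    for node in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node ]] || continue
        n=${node##*node}               # same suffix-stripping trick the script uses
        echo "node$n: $(cat "$node/hugepages/hugepages-2048kB/nr_hugepages") hugepages"
    done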
00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:32.360 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19206656 kB' 'MemUsed: 13432484 kB' 'SwapCached: 0 kB' 'Active: 6806548 kB' 'Inactive: 3369880 kB' 'Active(anon): 6517484 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 3369880 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9863668 kB' 'Mapped: 145604 kB' 'AnonPages: 316308 
kB' 'Shmem: 6204724 kB' 'KernelStack: 11864 kB' 'PageTables: 4944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 178760 kB' 'Slab: 504904 kB' 'SReclaimable: 178760 kB' 'SUnreclaim: 326144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.361 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 
12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.622 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.623 12:05:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:32.623 node0=1024 expecting 1024 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:32.623 00:03:32.623 real 0m6.938s 00:03:32.623 user 0m2.554s 00:03:32.623 sys 0m4.445s 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:32.623 12:05:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:32.623 ************************************ 00:03:32.623 END TEST no_shrink_alloc 00:03:32.623 ************************************ 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.623 12:05:00 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:32.623 12:05:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:32.623 00:03:32.623 real 0m26.509s 00:03:32.623 user 0m9.144s 00:03:32.623 sys 0m15.969s 00:03:32.623 12:05:00 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:32.623 12:05:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.623 ************************************ 00:03:32.623 END TEST hugepages 00:03:32.623 ************************************ 00:03:32.623 12:05:00 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:32.623 12:05:00 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:32.623 12:05:00 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:32.623 12:05:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:32.623 ************************************ 00:03:32.623 START TEST driver 00:03:32.623 ************************************ 00:03:32.623 12:05:01 setup.sh.driver -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:32.623 * Looking for test storage... 
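The guess_driver trace that follows decides whether vfio-pci is usable: it checks /sys/module/vfio/parameters/enable_unsafe_noiommu_mode, counts the entries under /sys/kernel/iommu_groups (176 on this node), and confirms that modprobe --show-depends vfio_pci resolves to real .ko modules. A simplified sketch of that decision, not setup/driver.sh verbatim (the function name pick_vfio_driver is mine and the fallback handling is reduced):

pick_vfio_driver() {
    local unsafe=N
    # unsafe no-IOMMU mode would let vfio-pci work even without IOMMU groups
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    # as in the original, an empty glob still counts as one entry unless nullglob is set
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        # only pick vfio-pci if the module chain actually resolves to .ko files
        if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
}

The repeated [[ vfio-pci == vfio-pci ]] checks further down are the test re-reading its own "setup output config" trace and comparing each reported device driver against that choice.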
00:03:32.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:32.623 12:05:01 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:32.623 12:05:01 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.623 12:05:01 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.900 12:05:05 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:37.900 12:05:05 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:37.900 12:05:05 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:37.900 12:05:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:37.900 ************************************ 00:03:37.900 START TEST guess_driver 00:03:37.900 ************************************ 00:03:37.900 12:05:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:03:37.900 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:37.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:37.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:37.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:37.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:37.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:37.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:37.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:37.901 12:05:05 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:37.901 Looking for driver=vfio-pci 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.901 12:05:05 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.437 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.438 12:05:08 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.438 12:05:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.385 12:05:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.385 12:05:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.385 12:05:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.385 12:05:10 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:42.385 12:05:10 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:42.385 12:05:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.385 12:05:10 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:47.663 00:03:47.663 real 0m9.632s 00:03:47.663 user 0m2.554s 00:03:47.663 sys 0m4.809s 00:03:47.663 12:05:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:47.663 12:05:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:47.663 ************************************ 00:03:47.663 END TEST guess_driver 00:03:47.663 ************************************ 00:03:47.663 00:03:47.663 real 0m14.247s 00:03:47.663 user 0m3.779s 00:03:47.663 sys 0m7.402s 00:03:47.663 12:05:15 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:47.663 
12:05:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:47.663 ************************************ 00:03:47.663 END TEST driver 00:03:47.663 ************************************ 00:03:47.663 12:05:15 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:47.663 12:05:15 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:47.663 12:05:15 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:47.663 12:05:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:47.663 ************************************ 00:03:47.663 START TEST devices 00:03:47.663 ************************************ 00:03:47.663 12:05:15 setup.sh.devices -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:47.663 * Looking for test storage... 00:03:47.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:47.663 12:05:15 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:47.663 12:05:15 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:47.663 12:05:15 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.663 12:05:15 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:50.956 12:05:19 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:50.956 12:05:19 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:50.956 12:05:19 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:50.956 12:05:19 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:50.956 12:05:19 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:50.956 12:05:19 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:50.956 12:05:19 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.956 12:05:19 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:50.956 12:05:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:50.956 12:05:19 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:50.956 No valid GPT data, 
bailing 00:03:50.956 12:05:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.956 12:05:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.956 12:05:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:50.956 12:05:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:50.956 12:05:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:50.956 12:05:19 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:50.956 12:05:19 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:50.956 12:05:19 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:50.956 12:05:19 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:50.957 12:05:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:50.957 ************************************ 00:03:50.957 START TEST nvme_mount 00:03:50.957 ************************************ 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:50.957 12:05:19 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:50.957 12:05:19 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:51.894 Creating new GPT entries in memory. 00:03:51.894 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:51.894 other utilities. 00:03:51.894 12:05:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:51.894 12:05:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.894 12:05:20 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.894 12:05:20 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.894 12:05:20 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:52.867 Creating new GPT entries in memory. 00:03:52.867 The operation has completed successfully. 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1919019 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
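Condensed, the nvme_mount steps traced above come down to the commands below; /dev/nvme0n1 and the mount point are the values from this run, and the real setup/common.sh helpers additionally serialize sgdisk with flock and wait for udev through sync_dev_uevents.sh:

disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                    # drop any existing GPT/MBR
sgdisk "$disk" --new=1:2048:2099199         # one ~1 GiB partition, as in the trace
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"                   # same flags the log shows
mount "${disk}p1" "$mnt"

The verify step that follows only has to confirm that setup.sh config, restricted to PCI_ALLOWED=0000:d8:00.0, reports the device as "mount@nvme0n1:nvme0n1p1, so not binding PCI dev", i.e. that the mounted partition keeps the NVMe controller bound to the kernel driver.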
00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.867 12:05:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:56.170 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.170 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.428 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:56.428 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:56.428 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:56.428 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:56.428 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:56.428 12:05:24 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:56.428 
12:05:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.428 12:05:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:56.428 12:05:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:56.686 12:05:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.686 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.686 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:56.686 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:56.686 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.686 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.686 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:56.686 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.686 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:56.686 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:56.686 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.686 12:05:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:56.686 12:05:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:56.686 12:05:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.686 12:05:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 
== \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.975 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:59.976 12:05:28 
setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.976 12:05:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.289 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:03.290 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:03.290 00:04:03.290 real 0m12.353s 00:04:03.290 user 0m3.490s 00:04:03.290 sys 0m6.733s 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:03.290 12:05:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:03.290 ************************************ 00:04:03.290 END TEST nvme_mount 00:04:03.290 
************************************ 00:04:03.290 12:05:31 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:03.290 12:05:31 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:03.290 12:05:31 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:03.290 12:05:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:03.290 ************************************ 00:04:03.290 START TEST dm_mount 00:04:03.290 ************************************ 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:03.290 12:05:31 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:04.230 Creating new GPT entries in memory. 00:04:04.230 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:04.230 other utilities. 00:04:04.230 12:05:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:04.230 12:05:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.231 12:05:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:04.231 12:05:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:04.231 12:05:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:05.167 Creating new GPT entries in memory. 00:04:05.167 The operation has completed successfully. 
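dm_mount drives the same partition_drive helper with part_no=2, so the start/end arithmetic in the trace produces two back-to-back 1 GiB partitions; the second sgdisk --new call appears just below. Worked out in plain bash (the echo is only for illustration):

size=$(( 1073741824 / 512 ))   # 1 GiB expressed in 512-byte sectors = 2097152
part_start=0 part_end=0
for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "sgdisk /dev/nvme0n1 --new=${part}:${part_start}:${part_end}"
done
# prints --new=1:2048:2099199 and --new=2:2099200:4196351, matching the log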
00:04:05.167 12:05:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:05.167 12:05:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.167 12:05:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.167 12:05:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.167 12:05:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:06.547 The operation has completed successfully. 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1923444 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.547 12:05:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:09.838 12:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:09.838 
12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.838 12:05:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:13.128 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:13.128 00:04:13.128 real 0m9.824s 00:04:13.128 user 0m2.413s 00:04:13.128 sys 0m4.496s 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:13.128 12:05:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:13.128 ************************************ 00:04:13.128 END TEST dm_mount 00:04:13.128 ************************************ 00:04:13.128 12:05:41 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:13.128 12:05:41 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:13.128 12:05:41 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.128 12:05:41 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.128 
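For reference, the cleanup_dm/cleanup_nvme steps traced above condense to a short teardown sequence. A minimal sketch, assuming $rootdir points at the SPDK checkout and the same nvme0n1 partitions as in this run (mount_dir and part are illustrative names):
  mount_dir="$rootdir/test/setup/dm_mount"
  mountpoint -q "$mount_dir" && umount "$mount_dir"            # unmount only if still mounted
  [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
  for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
      [[ -b "$part" ]] && wipefs --all "$part"                 # clear the signatures the test left behind
  done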
12:05:41 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:13.128 12:05:41 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.128 12:05:41 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:13.387 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:13.387 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:13.387 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:13.387 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:13.387 12:05:41 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:13.387 12:05:41 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:13.387 12:05:41 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:13.387 12:05:41 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.387 12:05:41 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:13.387 12:05:41 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.387 12:05:41 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:13.387 00:04:13.387 real 0m26.398s 00:04:13.387 user 0m7.230s 00:04:13.387 sys 0m14.000s 00:04:13.387 12:05:41 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:13.387 12:05:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:13.387 ************************************ 00:04:13.387 END TEST devices 00:04:13.387 ************************************ 00:04:13.387 00:04:13.387 real 1m31.130s 00:04:13.387 user 0m27.463s 00:04:13.387 sys 0m52.024s 00:04:13.387 12:05:41 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:13.387 12:05:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.387 ************************************ 00:04:13.387 END TEST setup.sh 00:04:13.387 ************************************ 00:04:13.387 12:05:41 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:16.701 Hugepages 00:04:16.701 node hugesize free / total 00:04:16.701 node0 1048576kB 0 / 0 00:04:16.701 node0 2048kB 2048 / 2048 00:04:16.701 node1 1048576kB 0 / 0 00:04:16.701 node1 2048kB 0 / 0 00:04:16.701 00:04:16.701 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:16.701 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:16.701 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:16.701 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:16.701 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:16.701 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:16.701 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:16.701 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:16.701 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:16.701 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:16.701 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:16.701 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:16.701 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:16.701 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:16.701 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:16.701 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:16.701 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:16.701 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:16.701 12:05:44 -- spdk/autotest.sh@130 -- # uname -s 00:04:16.701 
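The hugepage summary printed by setup.sh status above is read straight from sysfs; a rough standalone equivalent, assuming the usual /sys/devices/system/node layout (node and hp are just loop variables):
  for node in /sys/devices/system/node/node*; do
      for hp in "$node"/hugepages/hugepages-*; do
          printf '%s %s: %s free / %s total\n' "${node##*/}" "${hp##*hugepages-}" \
              "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
      done
  done
On this node that would reproduce the table above: 2048 of 2048 two-megabyte pages free on node0, none configured on node1.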
12:05:45 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:16.701 12:05:45 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:16.701 12:05:45 -- common/autotest_common.sh@1528 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.234 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.234 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.494 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:21.399 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:21.399 12:05:49 -- common/autotest_common.sh@1529 -- # sleep 1 00:04:22.339 12:05:50 -- common/autotest_common.sh@1530 -- # bdfs=() 00:04:22.339 12:05:50 -- common/autotest_common.sh@1530 -- # local bdfs 00:04:22.339 12:05:50 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:04:22.339 12:05:50 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:04:22.339 12:05:50 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:22.339 12:05:50 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:22.339 12:05:50 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.339 12:05:50 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:22.339 12:05:50 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:22.339 12:05:50 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:04:22.339 12:05:50 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:d8:00.0 00:04:22.339 12:05:50 -- common/autotest_common.sh@1533 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.633 Waiting for block devices as requested 00:04:25.633 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:25.633 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:25.893 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:25.893 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:25.893 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:26.153 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:26.153 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:26.153 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:26.412 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:26.412 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:26.412 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:26.672 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:26.672 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:26.672 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:26.932 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:26.932 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:26.932 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:27.192 12:05:55 -- common/autotest_common.sh@1535 -- # for bdf in 
"${bdfs[@]}" 00:04:27.192 12:05:55 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:27.192 12:05:55 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 00:04:27.192 12:05:55 -- common/autotest_common.sh@1499 -- # grep 0000:d8:00.0/nvme/nvme 00:04:27.192 12:05:55 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:27.192 12:05:55 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:27.192 12:05:55 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:27.192 12:05:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:04:27.192 12:05:55 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:04:27.192 12:05:55 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:04:27.192 12:05:55 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:04:27.192 12:05:55 -- common/autotest_common.sh@1542 -- # grep oacs 00:04:27.192 12:05:55 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:04:27.192 12:05:55 -- common/autotest_common.sh@1542 -- # oacs=' 0xe' 00:04:27.192 12:05:55 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:04:27.192 12:05:55 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:04:27.192 12:05:55 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:04:27.192 12:05:55 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:04:27.192 12:05:55 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:04:27.192 12:05:55 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:04:27.192 12:05:55 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:04:27.192 12:05:55 -- common/autotest_common.sh@1554 -- # continue 00:04:27.192 12:05:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:27.192 12:05:55 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:27.192 12:05:55 -- common/autotest_common.sh@10 -- # set +x 00:04:27.192 12:05:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:27.192 12:05:55 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:27.192 12:05:55 -- common/autotest_common.sh@10 -- # set +x 00:04:27.192 12:05:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:30.493 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:30.493 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:32.402 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:32.402 12:06:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:32.402 12:06:00 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:32.402 
12:06:00 -- common/autotest_common.sh@10 -- # set +x 00:04:32.402 12:06:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:32.402 12:06:00 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:04:32.402 12:06:00 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:04:32.402 12:06:00 -- common/autotest_common.sh@1574 -- # bdfs=() 00:04:32.402 12:06:00 -- common/autotest_common.sh@1574 -- # local bdfs 00:04:32.402 12:06:00 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:04:32.402 12:06:00 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:32.402 12:06:00 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:32.402 12:06:00 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.402 12:06:00 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:32.402 12:06:00 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:32.402 12:06:00 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:04:32.402 12:06:00 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:d8:00.0 00:04:32.402 12:06:00 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:04:32.402 12:06:00 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:32.402 12:06:00 -- common/autotest_common.sh@1577 -- # device=0x0a54 00:04:32.402 12:06:00 -- common/autotest_common.sh@1578 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:32.402 12:06:00 -- common/autotest_common.sh@1579 -- # bdfs+=($bdf) 00:04:32.402 12:06:00 -- common/autotest_common.sh@1583 -- # printf '%s\n' 0000:d8:00.0 00:04:32.402 12:06:00 -- common/autotest_common.sh@1589 -- # [[ -z 0000:d8:00.0 ]] 00:04:32.402 12:06:00 -- common/autotest_common.sh@1594 -- # spdk_tgt_pid=1933258 00:04:32.402 12:06:00 -- common/autotest_common.sh@1595 -- # waitforlisten 1933258 00:04:32.402 12:06:00 -- common/autotest_common.sh@828 -- # '[' -z 1933258 ']' 00:04:32.402 12:06:00 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.402 12:06:00 -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:32.402 12:06:00 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.402 12:06:00 -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:32.402 12:06:00 -- common/autotest_common.sh@10 -- # set +x 00:04:32.402 12:06:00 -- common/autotest_common.sh@1593 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.402 [2024-05-15 12:06:00.814332] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
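The controller checks traced above (find each NVMe BDF, map it to its /dev/nvme node, then gate on the OACS namespace-management bit and the unallocated capacity) can be reproduced with nvme-cli; a sketch, assuming nvme-cli is installed and $rootdir is the SPDK checkout:
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      for n in /sys/class/nvme/nvme*; do
          readlink -f "$n" | grep -q "$bdf/nvme/nvme" || continue
          ctrl=/dev/$(basename "$n")                            # /dev/nvme0 for 0000:d8:00.0 in this run
          oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
          (( oacs & 0x8 )) && echo "$ctrl supports namespace management"
          unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
          (( unvmcap == 0 )) && echo "$ctrl has no unallocated NVM capacity to reclaim"
      done
  done
In the trace above oacs comes back as 0xe, so the namespace-management bit (0x8) is set, and unvmcap is 0, which is why that loop iteration ends with a continue.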
00:04:32.402 [2024-05-15 12:06:00.814385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1933258 ] 00:04:32.402 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.402 [2024-05-15 12:06:00.885648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.662 [2024-05-15 12:06:00.960724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.231 12:06:01 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:33.231 12:06:01 -- common/autotest_common.sh@861 -- # return 0 00:04:33.231 12:06:01 -- common/autotest_common.sh@1597 -- # bdf_id=0 00:04:33.231 12:06:01 -- common/autotest_common.sh@1598 -- # for bdf in "${bdfs[@]}" 00:04:33.231 12:06:01 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:36.561 nvme0n1 00:04:36.561 12:06:04 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:36.562 [2024-05-15 12:06:04.719806] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:36.562 request: 00:04:36.562 { 00:04:36.562 "nvme_ctrlr_name": "nvme0", 00:04:36.562 "password": "test", 00:04:36.562 "method": "bdev_nvme_opal_revert", 00:04:36.562 "req_id": 1 00:04:36.562 } 00:04:36.562 Got JSON-RPC error response 00:04:36.562 response: 00:04:36.562 { 00:04:36.562 "code": -32602, 00:04:36.562 "message": "Invalid parameters" 00:04:36.562 } 00:04:36.562 12:06:04 -- common/autotest_common.sh@1601 -- # true 00:04:36.562 12:06:04 -- common/autotest_common.sh@1602 -- # (( ++bdf_id )) 00:04:36.562 12:06:04 -- common/autotest_common.sh@1605 -- # killprocess 1933258 00:04:36.562 12:06:04 -- common/autotest_common.sh@947 -- # '[' -z 1933258 ']' 00:04:36.562 12:06:04 -- common/autotest_common.sh@951 -- # kill -0 1933258 00:04:36.562 12:06:04 -- common/autotest_common.sh@952 -- # uname 00:04:36.562 12:06:04 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:36.562 12:06:04 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1933258 00:04:36.562 12:06:04 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:36.562 12:06:04 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:36.562 12:06:04 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1933258' 00:04:36.562 killing process with pid 1933258 00:04:36.562 12:06:04 -- common/autotest_common.sh@966 -- # kill 1933258 00:04:36.562 12:06:04 -- common/autotest_common.sh@971 -- # wait 1933258 00:04:39.093 12:06:07 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:39.093 12:06:07 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:39.093 12:06:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:39.093 12:06:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:39.093 12:06:07 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:39.093 12:06:07 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:39.093 12:06:07 -- common/autotest_common.sh@10 -- # set +x 00:04:39.094 12:06:07 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:39.094 12:06:07 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:39.094 12:06:07 -- common/autotest_common.sh@1104 -- # xtrace_disable 
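The attach and Opal-revert RPCs issued above can also be run by hand once spdk_tgt is listening; a sketch, assuming the default /var/tmp/spdk.sock socket and the controller address from this run:
  rpc="$rootdir/scripts/rpc.py"
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
  "$rpc" bdev_nvme_opal_revert -b nvme0 -p test || echo "nvme0 does not support Opal; revert skipped"
As the JSON-RPC response above shows, this drive reports no Opal support, so the revert fails with code -32602 and the run deliberately treats that failure as expected.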
00:04:39.094 12:06:07 -- common/autotest_common.sh@10 -- # set +x 00:04:39.094 ************************************ 00:04:39.094 START TEST env 00:04:39.094 ************************************ 00:04:39.094 12:06:07 env -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:39.094 * Looking for test storage... 00:04:39.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:39.094 12:06:07 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:39.094 12:06:07 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:39.094 12:06:07 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:39.094 12:06:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.094 ************************************ 00:04:39.094 START TEST env_memory 00:04:39.094 ************************************ 00:04:39.094 12:06:07 env.env_memory -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:39.094 00:04:39.094 00:04:39.094 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.094 http://cunit.sourceforge.net/ 00:04:39.094 00:04:39.094 00:04:39.094 Suite: memory 00:04:39.094 Test: alloc and free memory map ...[2024-05-15 12:06:07.216506] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:39.094 passed 00:04:39.094 Test: mem map translation ...[2024-05-15 12:06:07.235234] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:39.094 [2024-05-15 12:06:07.235248] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:39.094 [2024-05-15 12:06:07.235286] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:39.094 [2024-05-15 12:06:07.235294] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:39.094 passed 00:04:39.094 Test: mem map registration ...[2024-05-15 12:06:07.271482] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:39.094 [2024-05-15 12:06:07.271496] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:39.094 passed 00:04:39.094 Test: mem map adjacent registrations ...passed 00:04:39.094 00:04:39.094 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.094 suites 1 1 n/a 0 0 00:04:39.094 tests 4 4 4 0 0 00:04:39.094 asserts 152 152 152 0 n/a 00:04:39.094 00:04:39.094 Elapsed time = 0.134 seconds 00:04:39.094 00:04:39.094 real 0m0.147s 00:04:39.094 user 0m0.136s 00:04:39.094 sys 0m0.011s 00:04:39.094 12:06:07 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:39.094 12:06:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:39.094 ************************************ 00:04:39.094 END TEST 
env_memory 00:04:39.094 ************************************ 00:04:39.094 12:06:07 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:39.094 12:06:07 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:39.094 12:06:07 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:39.094 12:06:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.094 ************************************ 00:04:39.094 START TEST env_vtophys 00:04:39.094 ************************************ 00:04:39.094 12:06:07 env.env_vtophys -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:39.094 EAL: lib.eal log level changed from notice to debug 00:04:39.094 EAL: Detected lcore 0 as core 0 on socket 0 00:04:39.094 EAL: Detected lcore 1 as core 1 on socket 0 00:04:39.094 EAL: Detected lcore 2 as core 2 on socket 0 00:04:39.094 EAL: Detected lcore 3 as core 3 on socket 0 00:04:39.094 EAL: Detected lcore 4 as core 4 on socket 0 00:04:39.094 EAL: Detected lcore 5 as core 5 on socket 0 00:04:39.094 EAL: Detected lcore 6 as core 6 on socket 0 00:04:39.094 EAL: Detected lcore 7 as core 8 on socket 0 00:04:39.094 EAL: Detected lcore 8 as core 9 on socket 0 00:04:39.094 EAL: Detected lcore 9 as core 10 on socket 0 00:04:39.094 EAL: Detected lcore 10 as core 11 on socket 0 00:04:39.094 EAL: Detected lcore 11 as core 12 on socket 0 00:04:39.094 EAL: Detected lcore 12 as core 13 on socket 0 00:04:39.094 EAL: Detected lcore 13 as core 14 on socket 0 00:04:39.094 EAL: Detected lcore 14 as core 16 on socket 0 00:04:39.094 EAL: Detected lcore 15 as core 17 on socket 0 00:04:39.094 EAL: Detected lcore 16 as core 18 on socket 0 00:04:39.094 EAL: Detected lcore 17 as core 19 on socket 0 00:04:39.094 EAL: Detected lcore 18 as core 20 on socket 0 00:04:39.094 EAL: Detected lcore 19 as core 21 on socket 0 00:04:39.094 EAL: Detected lcore 20 as core 22 on socket 0 00:04:39.094 EAL: Detected lcore 21 as core 24 on socket 0 00:04:39.094 EAL: Detected lcore 22 as core 25 on socket 0 00:04:39.094 EAL: Detected lcore 23 as core 26 on socket 0 00:04:39.094 EAL: Detected lcore 24 as core 27 on socket 0 00:04:39.094 EAL: Detected lcore 25 as core 28 on socket 0 00:04:39.094 EAL: Detected lcore 26 as core 29 on socket 0 00:04:39.094 EAL: Detected lcore 27 as core 30 on socket 0 00:04:39.094 EAL: Detected lcore 28 as core 0 on socket 1 00:04:39.094 EAL: Detected lcore 29 as core 1 on socket 1 00:04:39.094 EAL: Detected lcore 30 as core 2 on socket 1 00:04:39.094 EAL: Detected lcore 31 as core 3 on socket 1 00:04:39.094 EAL: Detected lcore 32 as core 4 on socket 1 00:04:39.094 EAL: Detected lcore 33 as core 5 on socket 1 00:04:39.094 EAL: Detected lcore 34 as core 6 on socket 1 00:04:39.094 EAL: Detected lcore 35 as core 8 on socket 1 00:04:39.094 EAL: Detected lcore 36 as core 9 on socket 1 00:04:39.094 EAL: Detected lcore 37 as core 10 on socket 1 00:04:39.094 EAL: Detected lcore 38 as core 11 on socket 1 00:04:39.094 EAL: Detected lcore 39 as core 12 on socket 1 00:04:39.094 EAL: Detected lcore 40 as core 13 on socket 1 00:04:39.094 EAL: Detected lcore 41 as core 14 on socket 1 00:04:39.094 EAL: Detected lcore 42 as core 16 on socket 1 00:04:39.094 EAL: Detected lcore 43 as core 17 on socket 1 00:04:39.094 EAL: Detected lcore 44 as core 18 on socket 1 00:04:39.094 EAL: Detected lcore 45 as core 19 on socket 1 00:04:39.094 EAL: Detected lcore 46 as core 20 on socket 1 00:04:39.094 EAL: 
Detected lcore 47 as core 21 on socket 1 00:04:39.094 EAL: Detected lcore 48 as core 22 on socket 1 00:04:39.094 EAL: Detected lcore 49 as core 24 on socket 1 00:04:39.094 EAL: Detected lcore 50 as core 25 on socket 1 00:04:39.094 EAL: Detected lcore 51 as core 26 on socket 1 00:04:39.094 EAL: Detected lcore 52 as core 27 on socket 1 00:04:39.094 EAL: Detected lcore 53 as core 28 on socket 1 00:04:39.094 EAL: Detected lcore 54 as core 29 on socket 1 00:04:39.094 EAL: Detected lcore 55 as core 30 on socket 1 00:04:39.094 EAL: Detected lcore 56 as core 0 on socket 0 00:04:39.094 EAL: Detected lcore 57 as core 1 on socket 0 00:04:39.094 EAL: Detected lcore 58 as core 2 on socket 0 00:04:39.094 EAL: Detected lcore 59 as core 3 on socket 0 00:04:39.094 EAL: Detected lcore 60 as core 4 on socket 0 00:04:39.094 EAL: Detected lcore 61 as core 5 on socket 0 00:04:39.094 EAL: Detected lcore 62 as core 6 on socket 0 00:04:39.094 EAL: Detected lcore 63 as core 8 on socket 0 00:04:39.094 EAL: Detected lcore 64 as core 9 on socket 0 00:04:39.094 EAL: Detected lcore 65 as core 10 on socket 0 00:04:39.094 EAL: Detected lcore 66 as core 11 on socket 0 00:04:39.094 EAL: Detected lcore 67 as core 12 on socket 0 00:04:39.094 EAL: Detected lcore 68 as core 13 on socket 0 00:04:39.094 EAL: Detected lcore 69 as core 14 on socket 0 00:04:39.094 EAL: Detected lcore 70 as core 16 on socket 0 00:04:39.094 EAL: Detected lcore 71 as core 17 on socket 0 00:04:39.094 EAL: Detected lcore 72 as core 18 on socket 0 00:04:39.094 EAL: Detected lcore 73 as core 19 on socket 0 00:04:39.094 EAL: Detected lcore 74 as core 20 on socket 0 00:04:39.094 EAL: Detected lcore 75 as core 21 on socket 0 00:04:39.094 EAL: Detected lcore 76 as core 22 on socket 0 00:04:39.094 EAL: Detected lcore 77 as core 24 on socket 0 00:04:39.094 EAL: Detected lcore 78 as core 25 on socket 0 00:04:39.094 EAL: Detected lcore 79 as core 26 on socket 0 00:04:39.094 EAL: Detected lcore 80 as core 27 on socket 0 00:04:39.094 EAL: Detected lcore 81 as core 28 on socket 0 00:04:39.094 EAL: Detected lcore 82 as core 29 on socket 0 00:04:39.094 EAL: Detected lcore 83 as core 30 on socket 0 00:04:39.094 EAL: Detected lcore 84 as core 0 on socket 1 00:04:39.094 EAL: Detected lcore 85 as core 1 on socket 1 00:04:39.094 EAL: Detected lcore 86 as core 2 on socket 1 00:04:39.094 EAL: Detected lcore 87 as core 3 on socket 1 00:04:39.094 EAL: Detected lcore 88 as core 4 on socket 1 00:04:39.094 EAL: Detected lcore 89 as core 5 on socket 1 00:04:39.094 EAL: Detected lcore 90 as core 6 on socket 1 00:04:39.094 EAL: Detected lcore 91 as core 8 on socket 1 00:04:39.094 EAL: Detected lcore 92 as core 9 on socket 1 00:04:39.094 EAL: Detected lcore 93 as core 10 on socket 1 00:04:39.094 EAL: Detected lcore 94 as core 11 on socket 1 00:04:39.094 EAL: Detected lcore 95 as core 12 on socket 1 00:04:39.094 EAL: Detected lcore 96 as core 13 on socket 1 00:04:39.094 EAL: Detected lcore 97 as core 14 on socket 1 00:04:39.094 EAL: Detected lcore 98 as core 16 on socket 1 00:04:39.094 EAL: Detected lcore 99 as core 17 on socket 1 00:04:39.094 EAL: Detected lcore 100 as core 18 on socket 1 00:04:39.094 EAL: Detected lcore 101 as core 19 on socket 1 00:04:39.094 EAL: Detected lcore 102 as core 20 on socket 1 00:04:39.094 EAL: Detected lcore 103 as core 21 on socket 1 00:04:39.094 EAL: Detected lcore 104 as core 22 on socket 1 00:04:39.094 EAL: Detected lcore 105 as core 24 on socket 1 00:04:39.094 EAL: Detected lcore 106 as core 25 on socket 1 00:04:39.094 EAL: Detected lcore 107 as 
core 26 on socket 1 00:04:39.094 EAL: Detected lcore 108 as core 27 on socket 1 00:04:39.094 EAL: Detected lcore 109 as core 28 on socket 1 00:04:39.094 EAL: Detected lcore 110 as core 29 on socket 1 00:04:39.094 EAL: Detected lcore 111 as core 30 on socket 1 00:04:39.094 EAL: Maximum logical cores by configuration: 128 00:04:39.094 EAL: Detected CPU lcores: 112 00:04:39.094 EAL: Detected NUMA nodes: 2 00:04:39.095 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:39.095 EAL: Detected shared linkage of DPDK 00:04:39.095 EAL: No shared files mode enabled, IPC will be disabled 00:04:39.095 EAL: Bus pci wants IOVA as 'DC' 00:04:39.095 EAL: Buses did not request a specific IOVA mode. 00:04:39.095 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:39.095 EAL: Selected IOVA mode 'VA' 00:04:39.095 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.095 EAL: Probing VFIO support... 00:04:39.095 EAL: IOMMU type 1 (Type 1) is supported 00:04:39.095 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:39.095 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:39.095 EAL: VFIO support initialized 00:04:39.095 EAL: Ask a virtual area of 0x2e000 bytes 00:04:39.095 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:39.095 EAL: Setting up physically contiguous memory... 00:04:39.095 EAL: Setting maximum number of open files to 524288 00:04:39.095 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:39.095 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:39.095 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:39.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.095 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:39.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.095 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:39.095 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:39.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.095 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:39.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.095 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:39.095 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:39.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.095 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:39.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.095 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:39.095 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:39.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.095 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:39.095 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.095 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:39.095 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:39.095 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:39.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.095 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:39.095 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:39.095 EAL: Ask 
a virtual area of 0x400000000 bytes 00:04:39.095 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:39.095 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:39.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.095 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:39.095 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:39.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.095 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:39.095 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:39.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.095 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:39.095 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:39.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.095 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:39.095 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:39.095 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.095 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:39.095 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:39.095 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.095 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:39.095 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:39.095 EAL: Hugepages will be freed exactly as allocated. 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: TSC frequency is ~2500000 KHz 00:04:39.095 EAL: Main lcore 0 is ready (tid=7fa2c18ada00;cpuset=[0]) 00:04:39.095 EAL: Trying to obtain current memory policy. 00:04:39.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.095 EAL: Restoring previous memory policy: 0 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was expanded by 2MB 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:39.095 EAL: Mem event callback 'spdk:(nil)' registered 00:04:39.095 00:04:39.095 00:04:39.095 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.095 http://cunit.sourceforge.net/ 00:04:39.095 00:04:39.095 00:04:39.095 Suite: components_suite 00:04:39.095 Test: vtophys_malloc_test ...passed 00:04:39.095 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:39.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.095 EAL: Restoring previous memory policy: 4 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was expanded by 4MB 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was shrunk by 4MB 00:04:39.095 EAL: Trying to obtain current memory policy. 
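The EAL lines above report IOMMU type 1 and VFIO support initialized before the memseg lists are reserved; a rough host-side check for those same preconditions, offered only as a sketch (EAL performs its own, far more thorough probing):
  if [[ -c /dev/vfio/vfio ]] && compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
      echo "vfio usable: $(ls /sys/kernel/iommu_groups | wc -l) IOMMU groups present"
  else
      echo "vfio module not loaded or IOMMU not enabled in firmware/kernel"
  fi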
00:04:39.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.095 EAL: Restoring previous memory policy: 4 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was expanded by 6MB 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was shrunk by 6MB 00:04:39.095 EAL: Trying to obtain current memory policy. 00:04:39.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.095 EAL: Restoring previous memory policy: 4 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was expanded by 10MB 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was shrunk by 10MB 00:04:39.095 EAL: Trying to obtain current memory policy. 00:04:39.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.095 EAL: Restoring previous memory policy: 4 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was expanded by 18MB 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was shrunk by 18MB 00:04:39.095 EAL: Trying to obtain current memory policy. 00:04:39.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.095 EAL: Restoring previous memory policy: 4 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was expanded by 34MB 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was shrunk by 34MB 00:04:39.095 EAL: Trying to obtain current memory policy. 00:04:39.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.095 EAL: Restoring previous memory policy: 4 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was expanded by 66MB 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was shrunk by 66MB 00:04:39.095 EAL: Trying to obtain current memory policy. 
00:04:39.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.095 EAL: Restoring previous memory policy: 4 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was expanded by 130MB 00:04:39.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.095 EAL: request: mp_malloc_sync 00:04:39.095 EAL: No shared files mode enabled, IPC is disabled 00:04:39.095 EAL: Heap on socket 0 was shrunk by 130MB 00:04:39.095 EAL: Trying to obtain current memory policy. 00:04:39.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.354 EAL: Restoring previous memory policy: 4 00:04:39.354 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.354 EAL: request: mp_malloc_sync 00:04:39.354 EAL: No shared files mode enabled, IPC is disabled 00:04:39.354 EAL: Heap on socket 0 was expanded by 258MB 00:04:39.354 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.354 EAL: request: mp_malloc_sync 00:04:39.354 EAL: No shared files mode enabled, IPC is disabled 00:04:39.354 EAL: Heap on socket 0 was shrunk by 258MB 00:04:39.354 EAL: Trying to obtain current memory policy. 00:04:39.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.354 EAL: Restoring previous memory policy: 4 00:04:39.354 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.354 EAL: request: mp_malloc_sync 00:04:39.354 EAL: No shared files mode enabled, IPC is disabled 00:04:39.354 EAL: Heap on socket 0 was expanded by 514MB 00:04:39.613 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.613 EAL: request: mp_malloc_sync 00:04:39.613 EAL: No shared files mode enabled, IPC is disabled 00:04:39.613 EAL: Heap on socket 0 was shrunk by 514MB 00:04:39.613 EAL: Trying to obtain current memory policy. 
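The expand/shrink rounds above and below draw their memory from the 2048 kB hugepage pool shown earlier in the setup.sh status output; while such a test runs, the pool can be watched from another shell, for example:
  watch -n1 'grep -E "HugePages_(Total|Free)" /proc/meminfo'
This is only an observation aid, not part of the traced test flow.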
00:04:39.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.872 EAL: Restoring previous memory policy: 4 00:04:39.872 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.872 EAL: request: mp_malloc_sync 00:04:39.872 EAL: No shared files mode enabled, IPC is disabled 00:04:39.872 EAL: Heap on socket 0 was expanded by 1026MB 00:04:39.872 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.131 EAL: request: mp_malloc_sync 00:04:40.131 EAL: No shared files mode enabled, IPC is disabled 00:04:40.131 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:40.131 passed 00:04:40.131 00:04:40.131 Run Summary: Type Total Ran Passed Failed Inactive 00:04:40.131 suites 1 1 n/a 0 0 00:04:40.131 tests 2 2 2 0 0 00:04:40.131 asserts 497 497 497 0 n/a 00:04:40.131 00:04:40.131 Elapsed time = 0.957 seconds 00:04:40.131 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.131 EAL: request: mp_malloc_sync 00:04:40.131 EAL: No shared files mode enabled, IPC is disabled 00:04:40.131 EAL: Heap on socket 0 was shrunk by 2MB 00:04:40.131 EAL: No shared files mode enabled, IPC is disabled 00:04:40.131 EAL: No shared files mode enabled, IPC is disabled 00:04:40.131 EAL: No shared files mode enabled, IPC is disabled 00:04:40.131 00:04:40.131 real 0m1.085s 00:04:40.131 user 0m0.626s 00:04:40.131 sys 0m0.428s 00:04:40.131 12:06:08 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:40.131 12:06:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:40.131 ************************************ 00:04:40.131 END TEST env_vtophys 00:04:40.131 ************************************ 00:04:40.131 12:06:08 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:40.131 12:06:08 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:40.131 12:06:08 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:40.131 12:06:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:40.131 ************************************ 00:04:40.131 START TEST env_pci 00:04:40.131 ************************************ 00:04:40.131 12:06:08 env.env_pci -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:40.131 00:04:40.131 00:04:40.131 CUnit - A unit testing framework for C - Version 2.1-3 00:04:40.131 http://cunit.sourceforge.net/ 00:04:40.131 00:04:40.131 00:04:40.131 Suite: pci 00:04:40.131 Test: pci_hook ...[2024-05-15 12:06:08.575114] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1935265 has claimed it 00:04:40.131 EAL: Cannot find device (10000:00:01.0) 00:04:40.131 EAL: Failed to attach device on primary process 00:04:40.131 passed 00:04:40.131 00:04:40.131 Run Summary: Type Total Ran Passed Failed Inactive 00:04:40.131 suites 1 1 n/a 0 0 00:04:40.131 tests 1 1 1 0 0 00:04:40.131 asserts 25 25 25 0 n/a 00:04:40.131 00:04:40.131 Elapsed time = 0.025 seconds 00:04:40.131 00:04:40.131 real 0m0.040s 00:04:40.131 user 0m0.009s 00:04:40.131 sys 0m0.030s 00:04:40.131 12:06:08 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:40.131 12:06:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:40.131 ************************************ 00:04:40.131 END TEST env_pci 00:04:40.131 ************************************ 00:04:40.131 12:06:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:40.131 
12:06:08 env -- env/env.sh@15 -- # uname 00:04:40.131 12:06:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:40.131 12:06:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:40.131 12:06:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:40.131 12:06:08 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:04:40.131 12:06:08 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:40.131 12:06:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:40.390 ************************************ 00:04:40.390 START TEST env_dpdk_post_init 00:04:40.390 ************************************ 00:04:40.390 12:06:08 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:40.390 EAL: Detected CPU lcores: 112 00:04:40.390 EAL: Detected NUMA nodes: 2 00:04:40.390 EAL: Detected shared linkage of DPDK 00:04:40.390 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:40.390 EAL: Selected IOVA mode 'VA' 00:04:40.390 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.391 EAL: VFIO support initialized 00:04:40.391 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:40.391 EAL: Using IOMMU type 1 (Type 1) 00:04:40.391 EAL: Ignore mapping IO port bar(1) 00:04:40.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:40.391 EAL: Ignore mapping IO port bar(1) 00:04:40.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:40.391 EAL: Ignore mapping IO port bar(1) 00:04:40.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:40.391 EAL: Ignore mapping IO port bar(1) 00:04:40.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:40.391 EAL: Ignore mapping IO port bar(1) 00:04:40.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:40.391 EAL: Ignore mapping IO port bar(1) 00:04:40.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:40.391 EAL: Ignore mapping IO port bar(1) 00:04:40.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:40.391 EAL: Ignore mapping IO port bar(1) 00:04:40.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:40.650 EAL: Ignore mapping IO port bar(1) 00:04:40.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:40.650 EAL: Ignore mapping IO port bar(1) 00:04:40.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:40.650 EAL: Ignore mapping IO port bar(1) 00:04:40.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:40.650 EAL: Ignore mapping IO port bar(1) 00:04:40.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:40.650 EAL: Ignore mapping IO port bar(1) 00:04:40.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:40.650 EAL: Ignore mapping IO port bar(1) 00:04:40.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:40.650 EAL: Ignore mapping IO port bar(1) 00:04:40.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:40.650 EAL: 
Ignore mapping IO port bar(1) 00:04:40.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:41.587 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:44.868 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:44.868 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:45.125 Starting DPDK initialization... 00:04:45.125 Starting SPDK post initialization... 00:04:45.125 SPDK NVMe probe 00:04:45.125 Attaching to 0000:d8:00.0 00:04:45.125 Attached to 0000:d8:00.0 00:04:45.125 Cleaning up... 00:04:45.125 00:04:45.125 real 0m4.961s 00:04:45.125 user 0m3.674s 00:04:45.125 sys 0m0.342s 00:04:45.125 12:06:13 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:45.125 12:06:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:45.125 ************************************ 00:04:45.125 END TEST env_dpdk_post_init 00:04:45.125 ************************************ 00:04:45.384 12:06:13 env -- env/env.sh@26 -- # uname 00:04:45.384 12:06:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:45.384 12:06:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:45.384 12:06:13 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:45.384 12:06:13 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:45.384 12:06:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.384 ************************************ 00:04:45.384 START TEST env_mem_callbacks 00:04:45.384 ************************************ 00:04:45.384 12:06:13 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:45.384 EAL: Detected CPU lcores: 112 00:04:45.384 EAL: Detected NUMA nodes: 2 00:04:45.384 EAL: Detected shared linkage of DPDK 00:04:45.384 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:45.384 EAL: Selected IOVA mode 'VA' 00:04:45.384 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.384 EAL: VFIO support initialized 00:04:45.384 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:45.384 00:04:45.384 00:04:45.384 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.384 http://cunit.sourceforge.net/ 00:04:45.384 00:04:45.384 00:04:45.384 Suite: memory 00:04:45.384 Test: test ... 
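The probe lines above show each I/OAT channel and the NVMe controller being claimed through vfio-pci by SPDK's userspace drivers; a small sketch for checking what a given BDF is currently bound to, using the same sysfs links the setup scripts rely on (the two BDFs are examples from this run):
  for bdf in 0000:00:04.0 0000:d8:00.0; do
      drv=unbound
      [[ -e /sys/bus/pci/devices/$bdf/driver ]] && drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
      echo "$bdf -> $drv"        # vfio-pci while these tests run, ioatdma/nvme again after setup.sh reset
  done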
00:04:45.384 register 0x200000200000 2097152 00:04:45.384 malloc 3145728 00:04:45.384 register 0x200000400000 4194304 00:04:45.384 buf 0x200000500000 len 3145728 PASSED 00:04:45.384 malloc 64 00:04:45.384 buf 0x2000004fff40 len 64 PASSED 00:04:45.384 malloc 4194304 00:04:45.384 register 0x200000800000 6291456 00:04:45.384 buf 0x200000a00000 len 4194304 PASSED 00:04:45.384 free 0x200000500000 3145728 00:04:45.384 free 0x2000004fff40 64 00:04:45.384 unregister 0x200000400000 4194304 PASSED 00:04:45.384 free 0x200000a00000 4194304 00:04:45.384 unregister 0x200000800000 6291456 PASSED 00:04:45.384 malloc 8388608 00:04:45.384 register 0x200000400000 10485760 00:04:45.384 buf 0x200000600000 len 8388608 PASSED 00:04:45.384 free 0x200000600000 8388608 00:04:45.384 unregister 0x200000400000 10485760 PASSED 00:04:45.384 passed 00:04:45.384 00:04:45.384 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.384 suites 1 1 n/a 0 0 00:04:45.384 tests 1 1 1 0 0 00:04:45.384 asserts 15 15 15 0 n/a 00:04:45.384 00:04:45.384 Elapsed time = 0.006 seconds 00:04:45.384 00:04:45.384 real 0m0.067s 00:04:45.384 user 0m0.023s 00:04:45.384 sys 0m0.042s 00:04:45.384 12:06:13 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:45.384 12:06:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:45.384 ************************************ 00:04:45.384 END TEST env_mem_callbacks 00:04:45.384 ************************************ 00:04:45.384 00:04:45.384 real 0m6.803s 00:04:45.384 user 0m4.640s 00:04:45.384 sys 0m1.195s 00:04:45.384 12:06:13 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:45.384 12:06:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.384 ************************************ 00:04:45.384 END TEST env 00:04:45.384 ************************************ 00:04:45.384 12:06:13 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:45.384 12:06:13 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:45.384 12:06:13 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:45.384 12:06:13 -- common/autotest_common.sh@10 -- # set +x 00:04:45.642 ************************************ 00:04:45.642 START TEST rpc 00:04:45.642 ************************************ 00:04:45.642 12:06:13 rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:45.642 * Looking for test storage... 00:04:45.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:45.642 12:06:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1936243 00:04:45.642 12:06:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.642 12:06:14 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:45.642 12:06:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1936243 00:04:45.642 12:06:14 rpc -- common/autotest_common.sh@828 -- # '[' -z 1936243 ']' 00:04:45.642 12:06:14 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.642 12:06:14 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:45.642 12:06:14 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
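Once spdk_tgt is up and listening on /var/tmp/spdk.sock, the rpc_integrity steps traced below are ordinary rpc.py calls; a minimal sketch, assuming $rootdir is the SPDK checkout:
  "$rootdir/scripts/rpc.py" bdev_get_bdevs | jq length        # 0 before any bdev exists
  "$rootdir/scripts/rpc.py" bdev_malloc_create 8 512          # 8 MB malloc bdev, 512-byte blocks; prints its name (Malloc0 below)
  "$rootdir/scripts/rpc.py" bdev_get_bdevs | jq length        # 1 afterwards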
00:04:45.642 12:06:14 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:45.642 12:06:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.642 [2024-05-15 12:06:14.092800] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:04:45.642 [2024-05-15 12:06:14.092847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1936243 ] 00:04:45.642 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.642 [2024-05-15 12:06:14.160144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.900 [2024-05-15 12:06:14.231877] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:45.900 [2024-05-15 12:06:14.231919] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1936243' to capture a snapshot of events at runtime. 00:04:45.900 [2024-05-15 12:06:14.231933] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:45.900 [2024-05-15 12:06:14.231943] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:45.900 [2024-05-15 12:06:14.231952] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1936243 for offline analysis/debug. 00:04:45.900 [2024-05-15 12:06:14.231982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.466 12:06:14 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:46.466 12:06:14 rpc -- common/autotest_common.sh@861 -- # return 0 00:04:46.466 12:06:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:46.466 12:06:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:46.466 12:06:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:46.466 12:06:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:46.466 12:06:14 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:46.466 12:06:14 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:46.466 12:06:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.466 ************************************ 00:04:46.466 START TEST rpc_integrity 00:04:46.466 ************************************ 00:04:46.466 12:06:14 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:46.466 12:06:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:46.466 12:06:14 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.466 12:06:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.466 12:06:14 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:46.466 12:06:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.466 12:06:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:46.466 12:06:14 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.466 12:06:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.466 12:06:14 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.466 12:06:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.466 12:06:14 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:46.466 12:06:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:46.466 12:06:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.466 12:06:14 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.466 12:06:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.724 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:46.724 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:46.724 { 00:04:46.724 "name": "Malloc0", 00:04:46.724 "aliases": [ 00:04:46.724 "87efea35-a316-4805-9e3f-6b7c8c40c09f" 00:04:46.724 ], 00:04:46.724 "product_name": "Malloc disk", 00:04:46.724 "block_size": 512, 00:04:46.724 "num_blocks": 16384, 00:04:46.724 "uuid": "87efea35-a316-4805-9e3f-6b7c8c40c09f", 00:04:46.724 "assigned_rate_limits": { 00:04:46.724 "rw_ios_per_sec": 0, 00:04:46.724 "rw_mbytes_per_sec": 0, 00:04:46.724 "r_mbytes_per_sec": 0, 00:04:46.724 "w_mbytes_per_sec": 0 00:04:46.724 }, 00:04:46.724 "claimed": false, 00:04:46.724 "zoned": false, 00:04:46.724 "supported_io_types": { 00:04:46.724 "read": true, 00:04:46.724 "write": true, 00:04:46.724 "unmap": true, 00:04:46.724 "write_zeroes": true, 00:04:46.724 "flush": true, 00:04:46.724 "reset": true, 00:04:46.724 "compare": false, 00:04:46.724 "compare_and_write": false, 00:04:46.724 "abort": true, 00:04:46.725 "nvme_admin": false, 00:04:46.725 "nvme_io": false 00:04:46.725 }, 00:04:46.725 "memory_domains": [ 00:04:46.725 { 00:04:46.725 "dma_device_id": "system", 00:04:46.725 "dma_device_type": 1 00:04:46.725 }, 00:04:46.725 { 00:04:46.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.725 "dma_device_type": 2 00:04:46.725 } 00:04:46.725 ], 00:04:46.725 "driver_specific": {} 00:04:46.725 } 00:04:46.725 ]' 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.725 [2024-05-15 12:06:15.056374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:46.725 [2024-05-15 12:06:15.056413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.725 [2024-05-15 12:06:15.056434] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe90190 00:04:46.725 [2024-05-15 12:06:15.056447] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.725 [2024-05-15 12:06:15.057540] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.725 [2024-05-15 12:06:15.057563] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.725 Passthru0 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.725 { 00:04:46.725 "name": "Malloc0", 00:04:46.725 "aliases": [ 00:04:46.725 "87efea35-a316-4805-9e3f-6b7c8c40c09f" 00:04:46.725 ], 00:04:46.725 "product_name": "Malloc disk", 00:04:46.725 "block_size": 512, 00:04:46.725 "num_blocks": 16384, 00:04:46.725 "uuid": "87efea35-a316-4805-9e3f-6b7c8c40c09f", 00:04:46.725 "assigned_rate_limits": { 00:04:46.725 "rw_ios_per_sec": 0, 00:04:46.725 "rw_mbytes_per_sec": 0, 00:04:46.725 "r_mbytes_per_sec": 0, 00:04:46.725 "w_mbytes_per_sec": 0 00:04:46.725 }, 00:04:46.725 "claimed": true, 00:04:46.725 "claim_type": "exclusive_write", 00:04:46.725 "zoned": false, 00:04:46.725 "supported_io_types": { 00:04:46.725 "read": true, 00:04:46.725 "write": true, 00:04:46.725 "unmap": true, 00:04:46.725 "write_zeroes": true, 00:04:46.725 "flush": true, 00:04:46.725 "reset": true, 00:04:46.725 "compare": false, 00:04:46.725 "compare_and_write": false, 00:04:46.725 "abort": true, 00:04:46.725 "nvme_admin": false, 00:04:46.725 "nvme_io": false 00:04:46.725 }, 00:04:46.725 "memory_domains": [ 00:04:46.725 { 00:04:46.725 "dma_device_id": "system", 00:04:46.725 "dma_device_type": 1 00:04:46.725 }, 00:04:46.725 { 00:04:46.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.725 "dma_device_type": 2 00:04:46.725 } 00:04:46.725 ], 00:04:46.725 "driver_specific": {} 00:04:46.725 }, 00:04:46.725 { 00:04:46.725 "name": "Passthru0", 00:04:46.725 "aliases": [ 00:04:46.725 "4a2d9a8c-9ade-5155-8e38-02f9b23f53d5" 00:04:46.725 ], 00:04:46.725 "product_name": "passthru", 00:04:46.725 "block_size": 512, 00:04:46.725 "num_blocks": 16384, 00:04:46.725 "uuid": "4a2d9a8c-9ade-5155-8e38-02f9b23f53d5", 00:04:46.725 "assigned_rate_limits": { 00:04:46.725 "rw_ios_per_sec": 0, 00:04:46.725 "rw_mbytes_per_sec": 0, 00:04:46.725 "r_mbytes_per_sec": 0, 00:04:46.725 "w_mbytes_per_sec": 0 00:04:46.725 }, 00:04:46.725 "claimed": false, 00:04:46.725 "zoned": false, 00:04:46.725 "supported_io_types": { 00:04:46.725 "read": true, 00:04:46.725 "write": true, 00:04:46.725 "unmap": true, 00:04:46.725 "write_zeroes": true, 00:04:46.725 "flush": true, 00:04:46.725 "reset": true, 00:04:46.725 "compare": false, 00:04:46.725 "compare_and_write": false, 00:04:46.725 "abort": true, 00:04:46.725 "nvme_admin": false, 00:04:46.725 "nvme_io": false 00:04:46.725 }, 00:04:46.725 "memory_domains": [ 00:04:46.725 { 00:04:46.725 "dma_device_id": "system", 00:04:46.725 "dma_device_type": 1 00:04:46.725 }, 00:04:46.725 { 00:04:46.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.725 "dma_device_type": 2 00:04:46.725 } 00:04:46.725 ], 00:04:46.725 "driver_specific": { 00:04:46.725 "passthru": { 00:04:46.725 "name": "Passthru0", 00:04:46.725 "base_bdev_name": "Malloc0" 00:04:46.725 } 00:04:46.725 } 00:04:46.725 } 00:04:46.725 ]' 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.725 
12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.725 12:06:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.725 00:04:46.725 real 0m0.274s 00:04:46.725 user 0m0.166s 00:04:46.725 sys 0m0.043s 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:46.725 12:06:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.725 ************************************ 00:04:46.725 END TEST rpc_integrity 00:04:46.725 ************************************ 00:04:46.725 12:06:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:46.725 12:06:15 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:46.725 12:06:15 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:46.725 12:06:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.984 ************************************ 00:04:46.984 START TEST rpc_plugins 00:04:46.984 ************************************ 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:04:46.984 12:06:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:46.984 12:06:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:46.984 12:06:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:46.984 12:06:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:46.984 { 00:04:46.984 "name": "Malloc1", 00:04:46.984 "aliases": [ 00:04:46.984 "51dd4484-7560-449e-9c6f-2a6ec9cad4dd" 00:04:46.984 ], 00:04:46.984 "product_name": "Malloc disk", 00:04:46.984 "block_size": 4096, 00:04:46.984 "num_blocks": 256, 00:04:46.984 "uuid": "51dd4484-7560-449e-9c6f-2a6ec9cad4dd", 00:04:46.984 "assigned_rate_limits": { 00:04:46.984 "rw_ios_per_sec": 0, 00:04:46.984 "rw_mbytes_per_sec": 0, 00:04:46.984 "r_mbytes_per_sec": 0, 00:04:46.984 "w_mbytes_per_sec": 0 00:04:46.984 }, 00:04:46.984 "claimed": false, 00:04:46.984 "zoned": false, 00:04:46.984 "supported_io_types": { 00:04:46.984 "read": true, 00:04:46.984 "write": true, 00:04:46.984 "unmap": true, 00:04:46.984 "write_zeroes": true, 00:04:46.984 
"flush": true, 00:04:46.984 "reset": true, 00:04:46.984 "compare": false, 00:04:46.984 "compare_and_write": false, 00:04:46.984 "abort": true, 00:04:46.984 "nvme_admin": false, 00:04:46.984 "nvme_io": false 00:04:46.984 }, 00:04:46.984 "memory_domains": [ 00:04:46.984 { 00:04:46.984 "dma_device_id": "system", 00:04:46.984 "dma_device_type": 1 00:04:46.984 }, 00:04:46.984 { 00:04:46.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.984 "dma_device_type": 2 00:04:46.984 } 00:04:46.984 ], 00:04:46.984 "driver_specific": {} 00:04:46.984 } 00:04:46.984 ]' 00:04:46.984 12:06:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:46.984 12:06:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:46.984 12:06:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:46.984 12:06:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:46.984 12:06:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:46.984 12:06:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:46.984 12:06:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:46.984 00:04:46.984 real 0m0.145s 00:04:46.984 user 0m0.086s 00:04:46.984 sys 0m0.027s 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:46.984 12:06:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.984 ************************************ 00:04:46.984 END TEST rpc_plugins 00:04:46.984 ************************************ 00:04:46.984 12:06:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:46.984 12:06:15 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:46.984 12:06:15 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:46.984 12:06:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.242 ************************************ 00:04:47.242 START TEST rpc_trace_cmd_test 00:04:47.242 ************************************ 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:47.242 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1936243", 00:04:47.242 "tpoint_group_mask": "0x8", 00:04:47.242 "iscsi_conn": { 00:04:47.242 "mask": "0x2", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 }, 00:04:47.242 "scsi": { 00:04:47.242 "mask": "0x4", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 }, 00:04:47.242 "bdev": { 00:04:47.242 "mask": "0x8", 00:04:47.242 "tpoint_mask": 
"0xffffffffffffffff" 00:04:47.242 }, 00:04:47.242 "nvmf_rdma": { 00:04:47.242 "mask": "0x10", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 }, 00:04:47.242 "nvmf_tcp": { 00:04:47.242 "mask": "0x20", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 }, 00:04:47.242 "ftl": { 00:04:47.242 "mask": "0x40", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 }, 00:04:47.242 "blobfs": { 00:04:47.242 "mask": "0x80", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 }, 00:04:47.242 "dsa": { 00:04:47.242 "mask": "0x200", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 }, 00:04:47.242 "thread": { 00:04:47.242 "mask": "0x400", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 }, 00:04:47.242 "nvme_pcie": { 00:04:47.242 "mask": "0x800", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 }, 00:04:47.242 "iaa": { 00:04:47.242 "mask": "0x1000", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 }, 00:04:47.242 "nvme_tcp": { 00:04:47.242 "mask": "0x2000", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 }, 00:04:47.242 "bdev_nvme": { 00:04:47.242 "mask": "0x4000", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 }, 00:04:47.242 "sock": { 00:04:47.242 "mask": "0x8000", 00:04:47.242 "tpoint_mask": "0x0" 00:04:47.242 } 00:04:47.242 }' 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:47.242 00:04:47.242 real 0m0.229s 00:04:47.242 user 0m0.180s 00:04:47.242 sys 0m0.040s 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:47.242 12:06:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:47.242 ************************************ 00:04:47.242 END TEST rpc_trace_cmd_test 00:04:47.242 ************************************ 00:04:47.500 12:06:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:47.500 12:06:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:47.501 12:06:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:47.501 12:06:15 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:47.501 12:06:15 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:47.501 12:06:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.501 ************************************ 00:04:47.501 START TEST rpc_daemon_integrity 00:04:47.501 ************************************ 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:47.501 { 00:04:47.501 "name": "Malloc2", 00:04:47.501 "aliases": [ 00:04:47.501 "bf8bd5bd-8b01-4a8a-9fdd-124ffdf1db66" 00:04:47.501 ], 00:04:47.501 "product_name": "Malloc disk", 00:04:47.501 "block_size": 512, 00:04:47.501 "num_blocks": 16384, 00:04:47.501 "uuid": "bf8bd5bd-8b01-4a8a-9fdd-124ffdf1db66", 00:04:47.501 "assigned_rate_limits": { 00:04:47.501 "rw_ios_per_sec": 0, 00:04:47.501 "rw_mbytes_per_sec": 0, 00:04:47.501 "r_mbytes_per_sec": 0, 00:04:47.501 "w_mbytes_per_sec": 0 00:04:47.501 }, 00:04:47.501 "claimed": false, 00:04:47.501 "zoned": false, 00:04:47.501 "supported_io_types": { 00:04:47.501 "read": true, 00:04:47.501 "write": true, 00:04:47.501 "unmap": true, 00:04:47.501 "write_zeroes": true, 00:04:47.501 "flush": true, 00:04:47.501 "reset": true, 00:04:47.501 "compare": false, 00:04:47.501 "compare_and_write": false, 00:04:47.501 "abort": true, 00:04:47.501 "nvme_admin": false, 00:04:47.501 "nvme_io": false 00:04:47.501 }, 00:04:47.501 "memory_domains": [ 00:04:47.501 { 00:04:47.501 "dma_device_id": "system", 00:04:47.501 "dma_device_type": 1 00:04:47.501 }, 00:04:47.501 { 00:04:47.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.501 "dma_device_type": 2 00:04:47.501 } 00:04:47.501 ], 00:04:47.501 "driver_specific": {} 00:04:47.501 } 00:04:47.501 ]' 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.501 [2024-05-15 12:06:15.974854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:47.501 [2024-05-15 12:06:15.974884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:47.501 [2024-05-15 12:06:15.974903] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1029080 00:04:47.501 [2024-05-15 12:06:15.974917] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:47.501 [2024-05-15 12:06:15.975883] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:47.501 [2024-05-15 12:06:15.975907] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:47.501 Passthru0 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.501 12:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.501 12:06:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:47.501 12:06:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:47.501 { 00:04:47.501 "name": "Malloc2", 00:04:47.501 "aliases": [ 00:04:47.501 "bf8bd5bd-8b01-4a8a-9fdd-124ffdf1db66" 00:04:47.501 ], 00:04:47.501 "product_name": "Malloc disk", 00:04:47.501 "block_size": 512, 00:04:47.501 "num_blocks": 16384, 00:04:47.501 "uuid": "bf8bd5bd-8b01-4a8a-9fdd-124ffdf1db66", 00:04:47.501 "assigned_rate_limits": { 00:04:47.501 "rw_ios_per_sec": 0, 00:04:47.501 "rw_mbytes_per_sec": 0, 00:04:47.501 "r_mbytes_per_sec": 0, 00:04:47.501 "w_mbytes_per_sec": 0 00:04:47.501 }, 00:04:47.501 "claimed": true, 00:04:47.501 "claim_type": "exclusive_write", 00:04:47.501 "zoned": false, 00:04:47.501 "supported_io_types": { 00:04:47.501 "read": true, 00:04:47.501 "write": true, 00:04:47.501 "unmap": true, 00:04:47.501 "write_zeroes": true, 00:04:47.501 "flush": true, 00:04:47.501 "reset": true, 00:04:47.501 "compare": false, 00:04:47.501 "compare_and_write": false, 00:04:47.501 "abort": true, 00:04:47.501 "nvme_admin": false, 00:04:47.501 "nvme_io": false 00:04:47.501 }, 00:04:47.501 "memory_domains": [ 00:04:47.501 { 00:04:47.501 "dma_device_id": "system", 00:04:47.501 "dma_device_type": 1 00:04:47.501 }, 00:04:47.501 { 00:04:47.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.501 "dma_device_type": 2 00:04:47.501 } 00:04:47.501 ], 00:04:47.501 "driver_specific": {} 00:04:47.501 }, 00:04:47.501 { 00:04:47.501 "name": "Passthru0", 00:04:47.501 "aliases": [ 00:04:47.501 "464f6b1c-3979-57e6-af5b-032a65c7ee38" 00:04:47.501 ], 00:04:47.501 "product_name": "passthru", 00:04:47.501 "block_size": 512, 00:04:47.501 "num_blocks": 16384, 00:04:47.501 "uuid": "464f6b1c-3979-57e6-af5b-032a65c7ee38", 00:04:47.501 "assigned_rate_limits": { 00:04:47.501 "rw_ios_per_sec": 0, 00:04:47.501 "rw_mbytes_per_sec": 0, 00:04:47.501 "r_mbytes_per_sec": 0, 00:04:47.501 "w_mbytes_per_sec": 0 00:04:47.501 }, 00:04:47.501 "claimed": false, 00:04:47.501 "zoned": false, 00:04:47.501 "supported_io_types": { 00:04:47.501 "read": true, 00:04:47.501 "write": true, 00:04:47.501 "unmap": true, 00:04:47.501 "write_zeroes": true, 00:04:47.501 "flush": true, 00:04:47.501 "reset": true, 00:04:47.501 "compare": false, 00:04:47.501 "compare_and_write": false, 00:04:47.501 "abort": true, 00:04:47.501 "nvme_admin": false, 00:04:47.501 "nvme_io": false 00:04:47.501 }, 00:04:47.501 "memory_domains": [ 00:04:47.501 { 00:04:47.501 "dma_device_id": "system", 00:04:47.501 "dma_device_type": 1 00:04:47.501 }, 00:04:47.501 { 00:04:47.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.501 "dma_device_type": 2 00:04:47.501 } 00:04:47.501 ], 00:04:47.501 "driver_specific": { 00:04:47.501 "passthru": { 00:04:47.501 "name": "Passthru0", 00:04:47.501 "base_bdev_name": "Malloc2" 00:04:47.501 } 00:04:47.501 } 00:04:47.501 } 00:04:47.501 ]' 00:04:47.501 12:06:16 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:47.760 00:04:47.760 real 0m0.283s 00:04:47.760 user 0m0.164s 00:04:47.760 sys 0m0.056s 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:47.760 12:06:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.760 ************************************ 00:04:47.760 END TEST rpc_daemon_integrity 00:04:47.760 ************************************ 00:04:47.760 12:06:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:47.760 12:06:16 rpc -- rpc/rpc.sh@84 -- # killprocess 1936243 00:04:47.760 12:06:16 rpc -- common/autotest_common.sh@947 -- # '[' -z 1936243 ']' 00:04:47.760 12:06:16 rpc -- common/autotest_common.sh@951 -- # kill -0 1936243 00:04:47.760 12:06:16 rpc -- common/autotest_common.sh@952 -- # uname 00:04:47.760 12:06:16 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:47.760 12:06:16 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1936243 00:04:47.760 12:06:16 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:47.760 12:06:16 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:47.760 12:06:16 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1936243' 00:04:47.760 killing process with pid 1936243 00:04:47.760 12:06:16 rpc -- common/autotest_common.sh@966 -- # kill 1936243 00:04:47.760 12:06:16 rpc -- common/autotest_common.sh@971 -- # wait 1936243 00:04:48.020 00:04:48.020 real 0m2.619s 00:04:48.020 user 0m3.275s 00:04:48.020 sys 0m0.846s 00:04:48.020 12:06:16 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:48.020 12:06:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.020 ************************************ 00:04:48.020 END TEST rpc 00:04:48.020 ************************************ 00:04:48.279 12:06:16 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:48.279 12:06:16 
-- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:48.279 12:06:16 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:48.279 12:06:16 -- common/autotest_common.sh@10 -- # set +x 00:04:48.279 ************************************ 00:04:48.279 START TEST skip_rpc 00:04:48.279 ************************************ 00:04:48.279 12:06:16 skip_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:48.279 * Looking for test storage... 00:04:48.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:48.279 12:06:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:48.279 12:06:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:48.279 12:06:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:48.279 12:06:16 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:48.279 12:06:16 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:48.279 12:06:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.279 ************************************ 00:04:48.279 START TEST skip_rpc 00:04:48.279 ************************************ 00:04:48.279 12:06:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:04:48.279 12:06:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1936945 00:04:48.279 12:06:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.279 12:06:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:48.279 12:06:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:48.538 [2024-05-15 12:06:16.823482] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
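The skip_rpc case launched here starts the target with --no-rpc-server, so the spdk_get_version call attempted a few entries below has to fail; that negative check is the whole assertion. A rough equivalent, assuming scripts/rpc.py against the default /var/tmp/spdk.sock in place of the rpc_cmd helper used in the trace:

    # Sketch only: the RPC must NOT succeed while the target runs with --no-rpc-server.
    if scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: RPC answered although --no-rpc-server was given" >&2
        exit 1
    fi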
00:04:48.538 [2024-05-15 12:06:16.823526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1936945 ] 00:04:48.538 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.538 [2024-05-15 12:06:16.889858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.538 [2024-05-15 12:06:16.958075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:53.843 12:06:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1936945 00:04:53.844 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 1936945 ']' 00:04:53.844 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 1936945 00:04:53.844 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:04:53.844 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:53.844 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1936945 00:04:53.844 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:53.844 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:53.844 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1936945' 00:04:53.844 killing process with pid 1936945 00:04:53.844 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 1936945 00:04:53.844 12:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 1936945 00:04:53.844 00:04:53.844 real 0m5.392s 00:04:53.844 user 0m5.163s 00:04:53.844 sys 0m0.266s 00:04:53.844 12:06:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:53.844 12:06:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.844 ************************************ 00:04:53.844 END TEST skip_rpc 
00:04:53.844 ************************************ 00:04:53.844 12:06:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:53.844 12:06:22 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:53.844 12:06:22 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:53.844 12:06:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.844 ************************************ 00:04:53.844 START TEST skip_rpc_with_json 00:04:53.844 ************************************ 00:04:53.844 12:06:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:04:53.844 12:06:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:53.844 12:06:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1937925 00:04:53.844 12:06:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.844 12:06:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1937925 00:04:53.844 12:06:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 1937925 ']' 00:04:53.844 12:06:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.844 12:06:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:53.844 12:06:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.844 12:06:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:53.844 12:06:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.844 12:06:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.844 [2024-05-15 12:06:22.297488] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
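Condensed, the skip_rpc_with_json flow traced from here on is: query tcp transports (expected to fail while none exists), create one, dump the running configuration with save_config, restart the target from that JSON, and grep its log for the transport-init message. A sketch using the same RPC names as the trace (rpc_cmd is the autotest helper; paths shortened, the redirect into config.json is an assumption):

    rpc_cmd nvmf_get_transports --trtype tcp || true   # fails while no tcp transport exists
    rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd save_config > test/rpc/config.json
    # a second target is then booted straight from that file:
    #   spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json
    # and its log is grepped for "TCP Transport Init"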
00:04:53.844 [2024-05-15 12:06:22.297535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1937925 ] 00:04:53.844 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.844 [2024-05-15 12:06:22.367603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.102 [2024-05-15 12:06:22.442543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.669 [2024-05-15 12:06:23.091094] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:54.669 request: 00:04:54.669 { 00:04:54.669 "trtype": "tcp", 00:04:54.669 "method": "nvmf_get_transports", 00:04:54.669 "req_id": 1 00:04:54.669 } 00:04:54.669 Got JSON-RPC error response 00:04:54.669 response: 00:04:54.669 { 00:04:54.669 "code": -19, 00:04:54.669 "message": "No such device" 00:04:54.669 } 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.669 [2024-05-15 12:06:23.099183] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.669 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.928 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.928 12:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:54.928 { 00:04:54.928 "subsystems": [ 00:04:54.928 { 00:04:54.928 "subsystem": "vfio_user_target", 00:04:54.928 "config": null 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "keyring", 00:04:54.928 "config": [] 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "iobuf", 00:04:54.928 "config": [ 00:04:54.928 { 00:04:54.928 "method": "iobuf_set_options", 00:04:54.928 "params": { 00:04:54.928 "small_pool_count": 8192, 00:04:54.928 "large_pool_count": 1024, 00:04:54.928 "small_bufsize": 8192, 00:04:54.928 "large_bufsize": 135168 00:04:54.928 } 00:04:54.928 } 00:04:54.928 ] 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "sock", 00:04:54.928 "config": [ 00:04:54.928 { 00:04:54.928 "method": "sock_impl_set_options", 00:04:54.928 "params": { 00:04:54.928 "impl_name": "posix", 00:04:54.928 "recv_buf_size": 2097152, 00:04:54.928 "send_buf_size": 2097152, 
00:04:54.928 "enable_recv_pipe": true, 00:04:54.928 "enable_quickack": false, 00:04:54.928 "enable_placement_id": 0, 00:04:54.928 "enable_zerocopy_send_server": true, 00:04:54.928 "enable_zerocopy_send_client": false, 00:04:54.928 "zerocopy_threshold": 0, 00:04:54.928 "tls_version": 0, 00:04:54.928 "enable_ktls": false 00:04:54.928 } 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "method": "sock_impl_set_options", 00:04:54.928 "params": { 00:04:54.928 "impl_name": "ssl", 00:04:54.928 "recv_buf_size": 4096, 00:04:54.928 "send_buf_size": 4096, 00:04:54.928 "enable_recv_pipe": true, 00:04:54.928 "enable_quickack": false, 00:04:54.928 "enable_placement_id": 0, 00:04:54.928 "enable_zerocopy_send_server": true, 00:04:54.928 "enable_zerocopy_send_client": false, 00:04:54.928 "zerocopy_threshold": 0, 00:04:54.928 "tls_version": 0, 00:04:54.928 "enable_ktls": false 00:04:54.928 } 00:04:54.928 } 00:04:54.928 ] 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "vmd", 00:04:54.928 "config": [] 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "accel", 00:04:54.928 "config": [ 00:04:54.928 { 00:04:54.928 "method": "accel_set_options", 00:04:54.928 "params": { 00:04:54.928 "small_cache_size": 128, 00:04:54.928 "large_cache_size": 16, 00:04:54.928 "task_count": 2048, 00:04:54.928 "sequence_count": 2048, 00:04:54.928 "buf_count": 2048 00:04:54.928 } 00:04:54.928 } 00:04:54.928 ] 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "bdev", 00:04:54.928 "config": [ 00:04:54.928 { 00:04:54.928 "method": "bdev_set_options", 00:04:54.928 "params": { 00:04:54.928 "bdev_io_pool_size": 65535, 00:04:54.928 "bdev_io_cache_size": 256, 00:04:54.928 "bdev_auto_examine": true, 00:04:54.928 "iobuf_small_cache_size": 128, 00:04:54.928 "iobuf_large_cache_size": 16 00:04:54.928 } 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "method": "bdev_raid_set_options", 00:04:54.928 "params": { 00:04:54.928 "process_window_size_kb": 1024 00:04:54.928 } 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "method": "bdev_iscsi_set_options", 00:04:54.928 "params": { 00:04:54.928 "timeout_sec": 30 00:04:54.928 } 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "method": "bdev_nvme_set_options", 00:04:54.928 "params": { 00:04:54.928 "action_on_timeout": "none", 00:04:54.928 "timeout_us": 0, 00:04:54.928 "timeout_admin_us": 0, 00:04:54.928 "keep_alive_timeout_ms": 10000, 00:04:54.928 "arbitration_burst": 0, 00:04:54.928 "low_priority_weight": 0, 00:04:54.928 "medium_priority_weight": 0, 00:04:54.928 "high_priority_weight": 0, 00:04:54.928 "nvme_adminq_poll_period_us": 10000, 00:04:54.928 "nvme_ioq_poll_period_us": 0, 00:04:54.928 "io_queue_requests": 0, 00:04:54.928 "delay_cmd_submit": true, 00:04:54.928 "transport_retry_count": 4, 00:04:54.928 "bdev_retry_count": 3, 00:04:54.928 "transport_ack_timeout": 0, 00:04:54.928 "ctrlr_loss_timeout_sec": 0, 00:04:54.928 "reconnect_delay_sec": 0, 00:04:54.928 "fast_io_fail_timeout_sec": 0, 00:04:54.928 "disable_auto_failback": false, 00:04:54.928 "generate_uuids": false, 00:04:54.928 "transport_tos": 0, 00:04:54.928 "nvme_error_stat": false, 00:04:54.928 "rdma_srq_size": 0, 00:04:54.928 "io_path_stat": false, 00:04:54.928 "allow_accel_sequence": false, 00:04:54.928 "rdma_max_cq_size": 0, 00:04:54.928 "rdma_cm_event_timeout_ms": 0, 00:04:54.928 "dhchap_digests": [ 00:04:54.928 "sha256", 00:04:54.928 "sha384", 00:04:54.928 "sha512" 00:04:54.928 ], 00:04:54.928 "dhchap_dhgroups": [ 00:04:54.928 "null", 00:04:54.928 "ffdhe2048", 00:04:54.928 "ffdhe3072", 00:04:54.928 "ffdhe4096", 00:04:54.928 
"ffdhe6144", 00:04:54.928 "ffdhe8192" 00:04:54.928 ] 00:04:54.928 } 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "method": "bdev_nvme_set_hotplug", 00:04:54.928 "params": { 00:04:54.928 "period_us": 100000, 00:04:54.928 "enable": false 00:04:54.928 } 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "method": "bdev_wait_for_examine" 00:04:54.928 } 00:04:54.928 ] 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "scsi", 00:04:54.928 "config": null 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "scheduler", 00:04:54.928 "config": [ 00:04:54.928 { 00:04:54.928 "method": "framework_set_scheduler", 00:04:54.928 "params": { 00:04:54.928 "name": "static" 00:04:54.928 } 00:04:54.928 } 00:04:54.928 ] 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "vhost_scsi", 00:04:54.928 "config": [] 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "vhost_blk", 00:04:54.928 "config": [] 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "ublk", 00:04:54.928 "config": [] 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "nbd", 00:04:54.928 "config": [] 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "nvmf", 00:04:54.928 "config": [ 00:04:54.928 { 00:04:54.928 "method": "nvmf_set_config", 00:04:54.928 "params": { 00:04:54.928 "discovery_filter": "match_any", 00:04:54.928 "admin_cmd_passthru": { 00:04:54.928 "identify_ctrlr": false 00:04:54.928 } 00:04:54.928 } 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "method": "nvmf_set_max_subsystems", 00:04:54.928 "params": { 00:04:54.928 "max_subsystems": 1024 00:04:54.928 } 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "method": "nvmf_set_crdt", 00:04:54.928 "params": { 00:04:54.928 "crdt1": 0, 00:04:54.928 "crdt2": 0, 00:04:54.928 "crdt3": 0 00:04:54.928 } 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "method": "nvmf_create_transport", 00:04:54.928 "params": { 00:04:54.928 "trtype": "TCP", 00:04:54.928 "max_queue_depth": 128, 00:04:54.928 "max_io_qpairs_per_ctrlr": 127, 00:04:54.928 "in_capsule_data_size": 4096, 00:04:54.928 "max_io_size": 131072, 00:04:54.928 "io_unit_size": 131072, 00:04:54.928 "max_aq_depth": 128, 00:04:54.928 "num_shared_buffers": 511, 00:04:54.928 "buf_cache_size": 4294967295, 00:04:54.928 "dif_insert_or_strip": false, 00:04:54.928 "zcopy": false, 00:04:54.928 "c2h_success": true, 00:04:54.928 "sock_priority": 0, 00:04:54.928 "abort_timeout_sec": 1, 00:04:54.928 "ack_timeout": 0, 00:04:54.928 "data_wr_pool_size": 0 00:04:54.928 } 00:04:54.928 } 00:04:54.928 ] 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "subsystem": "iscsi", 00:04:54.928 "config": [ 00:04:54.928 { 00:04:54.928 "method": "iscsi_set_options", 00:04:54.928 "params": { 00:04:54.928 "node_base": "iqn.2016-06.io.spdk", 00:04:54.928 "max_sessions": 128, 00:04:54.928 "max_connections_per_session": 2, 00:04:54.928 "max_queue_depth": 64, 00:04:54.928 "default_time2wait": 2, 00:04:54.928 "default_time2retain": 20, 00:04:54.928 "first_burst_length": 8192, 00:04:54.928 "immediate_data": true, 00:04:54.928 "allow_duplicated_isid": false, 00:04:54.928 "error_recovery_level": 0, 00:04:54.928 "nop_timeout": 60, 00:04:54.928 "nop_in_interval": 30, 00:04:54.928 "disable_chap": false, 00:04:54.928 "require_chap": false, 00:04:54.928 "mutual_chap": false, 00:04:54.928 "chap_group": 0, 00:04:54.928 "max_large_datain_per_connection": 64, 00:04:54.928 "max_r2t_per_connection": 4, 00:04:54.928 "pdu_pool_size": 36864, 00:04:54.928 "immediate_data_pool_size": 16384, 00:04:54.928 "data_out_pool_size": 2048 00:04:54.928 } 00:04:54.928 } 00:04:54.928 ] 00:04:54.928 } 
00:04:54.928 ] 00:04:54.928 } 00:04:54.929 12:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:54.929 12:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1937925 00:04:54.929 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 1937925 ']' 00:04:54.929 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 1937925 00:04:54.929 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:04:54.929 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:54.929 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1937925 00:04:54.929 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:54.929 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:54.929 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1937925' 00:04:54.929 killing process with pid 1937925 00:04:54.929 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 1937925 00:04:54.929 12:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 1937925 00:04:55.187 12:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1938127 00:04:55.187 12:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:55.187 12:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:00.456 12:06:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1938127 00:05:00.456 12:06:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 1938127 ']' 00:05:00.456 12:06:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 1938127 00:05:00.456 12:06:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:05:00.456 12:06:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:00.456 12:06:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1938127 00:05:00.456 12:06:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:00.456 12:06:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:00.456 12:06:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1938127' 00:05:00.456 killing process with pid 1938127 00:05:00.456 12:06:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 1938127 00:05:00.456 12:06:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 1938127 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:00.716 00:05:00.716 real 0m6.798s 00:05:00.716 user 0m6.588s 00:05:00.716 sys 0m0.647s 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # xtrace_disable 
00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.716 ************************************ 00:05:00.716 END TEST skip_rpc_with_json 00:05:00.716 ************************************ 00:05:00.716 12:06:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:00.716 12:06:29 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:00.716 12:06:29 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:00.716 12:06:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.716 ************************************ 00:05:00.716 START TEST skip_rpc_with_delay 00:05:00.716 ************************************ 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.716 [2024-05-15 12:06:29.179235] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
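The rejection traced just above is the entire point of skip_rpc_with_delay: --wait-for-rpc defers subsystem initialization until an RPC arrives, which cannot happen with --no-rpc-server, so the target must refuse to start. Reduced to a sketch (binary path shortened):

    # Sketch only: this flag combination has to make spdk_tgt exit non-zero.
    if spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi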
00:05:00.716 [2024-05-15 12:06:29.179300] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:00.716 00:05:00.716 real 0m0.066s 00:05:00.716 user 0m0.043s 00:05:00.716 sys 0m0.022s 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:00.716 12:06:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:00.716 ************************************ 00:05:00.716 END TEST skip_rpc_with_delay 00:05:00.716 ************************************ 00:05:00.716 12:06:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:00.716 12:06:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:00.716 12:06:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:00.716 12:06:29 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:00.716 12:06:29 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:00.716 12:06:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.975 ************************************ 00:05:00.975 START TEST exit_on_failed_rpc_init 00:05:00.975 ************************************ 00:05:00.975 12:06:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:05:00.975 12:06:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1939169 00:05:00.975 12:06:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1939169 00:05:00.975 12:06:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 1939169 ']' 00:05:00.975 12:06:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.975 12:06:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:00.975 12:06:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.975 12:06:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:00.975 12:06:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.975 12:06:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.975 [2024-05-15 12:06:29.332844] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:05:00.975 [2024-05-15 12:06:29.332891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1939169 ] 00:05:00.975 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.975 [2024-05-15 12:06:29.400922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.975 [2024-05-15 12:06:29.473595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.912 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:01.912 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:05:01.912 12:06:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.912 12:06:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.912 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:01.913 [2024-05-15 12:06:30.180841] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:05:01.913 [2024-05-15 12:06:30.180892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1939430 ] 00:05:01.913 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.913 [2024-05-15 12:06:30.249356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.913 [2024-05-15 12:06:30.318873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.913 [2024-05-15 12:06:30.318954] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
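Note: exit_on_failed_rpc_init first starts a target (pid 1939169) that claims the default RPC socket /var/tmp/spdk.sock, then launches a second instance on another core mask against the same socket; the "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the expected outcome, and the test only passes if that second instance fails initialization and exits non-zero. A rough sketch of the scenario, where the backgrounding and cleanup lines are assumptions rather than the literal helper code:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/build/bin/spdk_tgt -m 0x1 &            # first instance owns /var/tmp/spdk.sock
  first_pid=$!
  if "$SPDK"/build/bin/spdk_tgt -m 0x2; then     # second instance must fail RPC init
      echo "second target unexpectedly initialized" >&2
      exit 1
  fi
  kill "$first_pid"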
00:05:01.913 [2024-05-15 12:06:30.318966] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:01.913 [2024-05-15 12:06:30.318975] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1939169 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 1939169 ']' 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 1939169 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:01.913 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1939169 00:05:02.171 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:02.171 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:02.171 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1939169' 00:05:02.171 killing process with pid 1939169 00:05:02.171 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 1939169 00:05:02.171 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 1939169 00:05:02.430 00:05:02.430 real 0m1.510s 00:05:02.430 user 0m1.691s 00:05:02.430 sys 0m0.469s 00:05:02.430 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:02.430 12:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.430 ************************************ 00:05:02.430 END TEST exit_on_failed_rpc_init 00:05:02.430 ************************************ 00:05:02.430 12:06:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:02.430 00:05:02.430 real 0m14.194s 00:05:02.430 user 0m13.615s 00:05:02.430 sys 0m1.718s 00:05:02.430 12:06:30 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:02.430 12:06:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.430 ************************************ 00:05:02.430 END TEST skip_rpc 00:05:02.430 ************************************ 00:05:02.430 12:06:30 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:02.430 12:06:30 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:02.430 12:06:30 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:02.430 12:06:30 -- 
common/autotest_common.sh@10 -- # set +x 00:05:02.430 ************************************ 00:05:02.430 START TEST rpc_client 00:05:02.430 ************************************ 00:05:02.430 12:06:30 rpc_client -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:02.690 * Looking for test storage... 00:05:02.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:02.690 12:06:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:02.690 OK 00:05:02.690 12:06:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:02.690 00:05:02.690 real 0m0.138s 00:05:02.690 user 0m0.049s 00:05:02.690 sys 0m0.099s 00:05:02.690 12:06:31 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:02.690 12:06:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:02.690 ************************************ 00:05:02.690 END TEST rpc_client 00:05:02.690 ************************************ 00:05:02.690 12:06:31 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:02.690 12:06:31 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:02.690 12:06:31 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:02.690 12:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:02.690 ************************************ 00:05:02.690 START TEST json_config 00:05:02.690 ************************************ 00:05:02.690 12:06:31 json_config -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:02.690 12:06:31 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:02.690 12:06:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.950 12:06:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.951 12:06:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.951 12:06:31 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:02.951 12:06:31 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.951 12:06:31 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.951 12:06:31 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.951 12:06:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.951 12:06:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.951 12:06:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.951 12:06:31 json_config -- paths/export.sh@5 -- # export PATH 00:05:02.951 12:06:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.951 12:06:31 json_config -- nvmf/common.sh@47 -- # : 0 00:05:02.951 12:06:31 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:02.951 12:06:31 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:02.951 12:06:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.951 12:06:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.951 12:06:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.951 12:06:31 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:02.951 12:06:31 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:02.951 12:06:31 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:02.951 INFO: JSON configuration test init 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:02.951 12:06:31 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:02.951 12:06:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:02.951 12:06:31 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:02.951 12:06:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.951 12:06:31 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:02.951 12:06:31 json_config -- json_config/common.sh@9 -- # local app=target 00:05:02.951 12:06:31 json_config -- json_config/common.sh@10 -- # shift 00:05:02.951 12:06:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.951 12:06:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.951 12:06:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.951 12:06:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.951 12:06:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.951 12:06:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1939598 00:05:02.951 12:06:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.951 Waiting for target to run... 
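Note: json_config_test_start_app launches the target with --wait-for-rpc, so spdk_tgt (pid 1939598) idles on /var/tmp/spdk_tgt.sock until configuration is pushed over RPC; everything that follows is driven through scripts/rpc.py. A condensed version of the first calls logged below, where the pipe is an assumption about how the two logged commands are combined:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  "$SPDK"/scripts/gen_nvme.sh --json-with-subsystems \
      | "$SPDK"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config   # apply the generated NVMe bdev config
  "$SPDK"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types    # expect bdev_register and bdev_unregister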
00:05:02.951 12:06:31 json_config -- json_config/common.sh@25 -- # waitforlisten 1939598 /var/tmp/spdk_tgt.sock 00:05:02.951 12:06:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:02.951 12:06:31 json_config -- common/autotest_common.sh@828 -- # '[' -z 1939598 ']' 00:05:02.951 12:06:31 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.951 12:06:31 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:02.951 12:06:31 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.951 12:06:31 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:02.951 12:06:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.951 [2024-05-15 12:06:31.321808] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:05:02.951 [2024-05-15 12:06:31.321858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1939598 ] 00:05:02.951 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.518 [2024-05-15 12:06:31.750818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.518 [2024-05-15 12:06:31.839926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.777 12:06:32 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:03.777 12:06:32 json_config -- common/autotest_common.sh@861 -- # return 0 00:05:03.777 12:06:32 json_config -- json_config/common.sh@26 -- # echo '' 00:05:03.777 00:05:03.777 12:06:32 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:03.777 12:06:32 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:03.777 12:06:32 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:03.777 12:06:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.777 12:06:32 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:03.777 12:06:32 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:03.777 12:06:32 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:03.777 12:06:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.777 12:06:32 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:03.777 12:06:32 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:03.777 12:06:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:07.056 12:06:35 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:07.056 12:06:35 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:07.056 12:06:35 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:07.056 12:06:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.056 12:06:35 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:07.056 12:06:35 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:07.056 12:06:35 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:07.056 12:06:35 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:07.057 12:06:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:07.057 12:06:35 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:07.057 12:06:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:07.057 12:06:35 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:07.057 12:06:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:07.057 12:06:35 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:07.057 12:06:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:07.315 MallocForNvmf0 00:05:07.315 12:06:35 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:07.315 12:06:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:07.315 MallocForNvmf1 00:05:07.315 12:06:35 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:07.315 12:06:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:07.573 [2024-05-15 12:06:35.979198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.573 12:06:35 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:07.573 12:06:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:07.831 12:06:36 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:07.831 12:06:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:07.831 12:06:36 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:07.831 12:06:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:08.089 12:06:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:08.089 12:06:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:08.348 [2024-05-15 12:06:36.648945] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:08.348 [2024-05-15 12:06:36.649353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:08.348 12:06:36 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:08.348 12:06:36 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:08.348 12:06:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.348 12:06:36 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:08.348 12:06:36 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:08.348 12:06:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.348 12:06:36 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:08.348 12:06:36 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:08.348 12:06:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:08.606 MallocBdevForConfigChangeCheck 00:05:08.606 12:06:36 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:08.606 12:06:36 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:08.606 12:06:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.606 12:06:36 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:08.606 12:06:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.864 12:06:37 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:05:08.864 INFO: shutting down applications... 00:05:08.864 12:06:37 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:08.864 12:06:37 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:08.864 12:06:37 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:08.864 12:06:37 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:11.410 Calling clear_iscsi_subsystem 00:05:11.410 Calling clear_nvmf_subsystem 00:05:11.410 Calling clear_nbd_subsystem 00:05:11.410 Calling clear_ublk_subsystem 00:05:11.410 Calling clear_vhost_blk_subsystem 00:05:11.410 Calling clear_vhost_scsi_subsystem 00:05:11.410 Calling clear_bdev_subsystem 00:05:11.410 12:06:39 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:11.410 12:06:39 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:11.410 12:06:39 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:11.410 12:06:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.410 12:06:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:11.410 12:06:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:11.410 12:06:39 json_config -- json_config/json_config.sh@345 -- # break 00:05:11.410 12:06:39 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:11.410 12:06:39 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:11.410 12:06:39 json_config -- json_config/common.sh@31 -- # local app=target 00:05:11.410 12:06:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.410 12:06:39 json_config -- json_config/common.sh@35 -- # [[ -n 1939598 ]] 00:05:11.410 12:06:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1939598 00:05:11.410 [2024-05-15 12:06:39.738499] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:11.410 12:06:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.410 12:06:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.410 12:06:39 json_config -- json_config/common.sh@41 -- # kill -0 1939598 00:05:11.410 12:06:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.990 12:06:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.990 12:06:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.990 12:06:40 json_config -- json_config/common.sh@41 -- # kill -0 1939598 00:05:11.990 12:06:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.990 12:06:40 json_config -- json_config/common.sh@43 -- # break 00:05:11.990 12:06:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.990 12:06:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.990 SPDK target shutdown done 00:05:11.990 12:06:40 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:11.990 INFO: relaunching applications... 00:05:11.990 12:06:40 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.990 12:06:40 json_config -- json_config/common.sh@9 -- # local app=target 00:05:11.990 12:06:40 json_config -- json_config/common.sh@10 -- # shift 00:05:11.990 12:06:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.990 12:06:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.990 12:06:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.990 12:06:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.990 12:06:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.990 12:06:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1941290 00:05:11.990 12:06:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.990 Waiting for target to run... 00:05:11.990 12:06:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.990 12:06:40 json_config -- json_config/common.sh@25 -- # waitforlisten 1941290 /var/tmp/spdk_tgt.sock 00:05:11.990 12:06:40 json_config -- common/autotest_common.sh@828 -- # '[' -z 1941290 ']' 00:05:11.990 12:06:40 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.990 12:06:40 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:11.990 12:06:40 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.990 12:06:40 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:11.990 12:06:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.990 [2024-05-15 12:06:40.299938] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
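Note: for the relaunch above, the target is restarted with the configuration that was just saved, passed in via --json, so no --wait-for-rpc pause is needed this time. Stand-alone, the logged command line amounts to:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json "$SPDK"/spdk_tgt_config.json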
00:05:11.990 [2024-05-15 12:06:40.299998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1941290 ] 00:05:11.990 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.248 [2024-05-15 12:06:40.736016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.507 [2024-05-15 12:06:40.822162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.791 [2024-05-15 12:06:43.845352] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.791 [2024-05-15 12:06:43.877361] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:15.791 [2024-05-15 12:06:43.877763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:16.050 12:06:44 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:16.050 12:06:44 json_config -- common/autotest_common.sh@861 -- # return 0 00:05:16.050 12:06:44 json_config -- json_config/common.sh@26 -- # echo '' 00:05:16.050 00:05:16.050 12:06:44 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:16.050 12:06:44 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:16.050 INFO: Checking if target configuration is the same... 00:05:16.050 12:06:44 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:16.050 12:06:44 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.050 12:06:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.050 + '[' 2 -ne 2 ']' 00:05:16.050 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:16.050 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:16.050 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:16.050 +++ basename /dev/fd/62 00:05:16.050 ++ mktemp /tmp/62.XXX 00:05:16.050 + tmp_file_1=/tmp/62.rIS 00:05:16.050 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.050 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:16.050 + tmp_file_2=/tmp/spdk_tgt_config.json.Px6 00:05:16.050 + ret=0 00:05:16.050 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:16.309 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:16.309 + diff -u /tmp/62.rIS /tmp/spdk_tgt_config.json.Px6 00:05:16.309 + echo 'INFO: JSON config files are the same' 00:05:16.309 INFO: JSON config files are the same 00:05:16.309 + rm /tmp/62.rIS /tmp/spdk_tgt_config.json.Px6 00:05:16.309 + exit 0 00:05:16.309 12:06:44 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:16.309 12:06:44 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:16.309 INFO: changing configuration and checking if this can be detected... 
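Note: the "JSON config files are the same" verdict comes from json_diff.sh, which normalizes both documents with config_filter.py -method sort into mktemp files (/tmp/62.rIS and /tmp/spdk_tgt_config.json.Px6 in this run) and diffs them; an empty diff means the relaunched target reproduced the saved configuration exactly. Roughly, assuming config_filter.py reads the config on stdin (the xtrace output does not show the redirections) and using illustrative output names:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | "$SPDK"/test/json_config/config_filter.py -method sort > /tmp/live.json
  "$SPDK"/test/json_config/config_filter.py -method sort \
      < "$SPDK"/spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'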
00:05:16.309 12:06:44 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:16.309 12:06:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:16.567 12:06:44 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.567 12:06:44 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:16.567 12:06:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.567 + '[' 2 -ne 2 ']' 00:05:16.567 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:16.567 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:16.567 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:16.567 +++ basename /dev/fd/62 00:05:16.567 ++ mktemp /tmp/62.XXX 00:05:16.567 + tmp_file_1=/tmp/62.V52 00:05:16.567 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.567 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:16.567 + tmp_file_2=/tmp/spdk_tgt_config.json.diI 00:05:16.567 + ret=0 00:05:16.567 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:16.825 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:16.825 + diff -u /tmp/62.V52 /tmp/spdk_tgt_config.json.diI 00:05:16.825 + ret=1 00:05:16.825 + echo '=== Start of file: /tmp/62.V52 ===' 00:05:16.825 + cat /tmp/62.V52 00:05:16.825 + echo '=== End of file: /tmp/62.V52 ===' 00:05:16.825 + echo '' 00:05:16.825 + echo '=== Start of file: /tmp/spdk_tgt_config.json.diI ===' 00:05:16.825 + cat /tmp/spdk_tgt_config.json.diI 00:05:16.825 + echo '=== End of file: /tmp/spdk_tgt_config.json.diI ===' 00:05:16.825 + echo '' 00:05:16.825 + rm /tmp/62.V52 /tmp/spdk_tgt_config.json.diI 00:05:16.825 + exit 1 00:05:16.825 12:06:45 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:16.825 INFO: configuration change detected. 
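Note: change detection reuses the same diff machinery. The test deletes the throwaway MallocBdevForConfigChangeCheck bdev over RPC and then expects the sorted configs to diverge, which is why the second json_diff.sh run above returns 1 and dumps both files before reporting the change. The step that introduces the difference is just:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck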
00:05:16.825 12:06:45 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:16.825 12:06:45 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:16.825 12:06:45 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:16.825 12:06:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.825 12:06:45 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:16.825 12:06:45 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:16.825 12:06:45 json_config -- json_config/json_config.sh@317 -- # [[ -n 1941290 ]] 00:05:16.825 12:06:45 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:16.825 12:06:45 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:16.825 12:06:45 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:16.825 12:06:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.083 12:06:45 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:17.083 12:06:45 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:17.083 12:06:45 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:17.083 12:06:45 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:17.083 12:06:45 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:17.083 12:06:45 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:17.083 12:06:45 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:17.083 12:06:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.083 12:06:45 json_config -- json_config/json_config.sh@323 -- # killprocess 1941290 00:05:17.083 12:06:45 json_config -- common/autotest_common.sh@947 -- # '[' -z 1941290 ']' 00:05:17.083 12:06:45 json_config -- common/autotest_common.sh@951 -- # kill -0 1941290 00:05:17.083 12:06:45 json_config -- common/autotest_common.sh@952 -- # uname 00:05:17.083 12:06:45 json_config -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:17.083 12:06:45 json_config -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1941290 00:05:17.083 12:06:45 json_config -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:17.083 12:06:45 json_config -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:17.083 12:06:45 json_config -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1941290' 00:05:17.083 killing process with pid 1941290 00:05:17.083 12:06:45 json_config -- common/autotest_common.sh@966 -- # kill 1941290 00:05:17.083 [2024-05-15 12:06:45.469369] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:17.083 12:06:45 json_config -- common/autotest_common.sh@971 -- # wait 1941290 00:05:18.984 12:06:47 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.984 12:06:47 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:18.984 12:06:47 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:18.984 12:06:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.244 12:06:47 
json_config -- json_config/json_config.sh@328 -- # return 0 00:05:19.244 12:06:47 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:19.244 INFO: Success 00:05:19.244 00:05:19.244 real 0m16.394s 00:05:19.244 user 0m16.815s 00:05:19.244 sys 0m2.312s 00:05:19.244 12:06:47 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:19.244 12:06:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.244 ************************************ 00:05:19.244 END TEST json_config 00:05:19.244 ************************************ 00:05:19.244 12:06:47 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:19.244 12:06:47 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:19.244 12:06:47 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:19.244 12:06:47 -- common/autotest_common.sh@10 -- # set +x 00:05:19.244 ************************************ 00:05:19.244 START TEST json_config_extra_key 00:05:19.244 ************************************ 00:05:19.244 12:06:47 json_config_extra_key -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:19.244 12:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.244 12:06:47 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.244 12:06:47 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.244 
12:06:47 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.244 12:06:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.244 12:06:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.244 12:06:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.244 12:06:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:19.244 12:06:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:19.244 12:06:47 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:19.244 12:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:19.244 12:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:19.244 12:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:19.244 12:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:19.244 12:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:19.244 12:06:47 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:19.244 12:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:19.244 12:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:19.244 12:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:19.244 12:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.244 12:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:19.244 INFO: launching applications... 00:05:19.244 12:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:19.244 12:06:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:19.244 12:06:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:19.244 12:06:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.244 12:06:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.244 12:06:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.244 12:06:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.244 12:06:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.244 12:06:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1942735 00:05:19.244 12:06:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.244 Waiting for target to run... 00:05:19.244 12:06:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1942735 /var/tmp/spdk_tgt.sock 00:05:19.244 12:06:47 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 1942735 ']' 00:05:19.244 12:06:47 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:19.244 12:06:47 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.244 12:06:47 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:19.244 12:06:47 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.244 12:06:47 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:19.244 12:06:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.244 [2024-05-15 12:06:47.773160] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:05:19.244 [2024-05-15 12:06:47.773218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1942735 ] 00:05:19.503 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.761 [2024-05-15 12:06:48.063912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.761 [2024-05-15 12:06:48.127822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.327 12:06:48 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:20.327 12:06:48 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:05:20.327 12:06:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:20.327 00:05:20.327 12:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:20.327 INFO: shutting down applications... 00:05:20.327 12:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:20.327 12:06:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:20.327 12:06:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.327 12:06:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1942735 ]] 00:05:20.327 12:06:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1942735 00:05:20.327 12:06:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.327 12:06:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.327 12:06:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1942735 00:05:20.327 12:06:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.587 12:06:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.587 12:06:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.587 12:06:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1942735 00:05:20.587 12:06:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.587 12:06:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:20.587 12:06:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.587 12:06:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.587 SPDK target shutdown done 00:05:20.587 12:06:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:20.587 Success 00:05:20.587 00:05:20.587 real 0m1.458s 00:05:20.587 user 0m1.208s 00:05:20.587 sys 0m0.412s 00:05:20.587 12:06:49 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:20.587 12:06:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.587 ************************************ 00:05:20.587 END TEST json_config_extra_key 00:05:20.587 ************************************ 00:05:20.587 12:06:49 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.587 12:06:49 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:20.587 12:06:49 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:20.587 12:06:49 -- common/autotest_common.sh@10 -- # set +x 00:05:20.846 ************************************ 
00:05:20.846 START TEST alias_rpc 00:05:20.846 ************************************ 00:05:20.846 12:06:49 alias_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.846 * Looking for test storage... 00:05:20.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:20.846 12:06:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.846 12:06:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1943046 00:05:20.846 12:06:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1943046 00:05:20.846 12:06:49 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 1943046 ']' 00:05:20.846 12:06:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.846 12:06:49 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.846 12:06:49 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:20.846 12:06:49 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.846 12:06:49 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:20.846 12:06:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.846 [2024-05-15 12:06:49.298572] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:05:20.846 [2024-05-15 12:06:49.298621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1943046 ] 00:05:20.846 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.846 [2024-05-15 12:06:49.368177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.105 [2024-05-15 12:06:49.443965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.672 12:06:50 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:21.672 12:06:50 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:21.672 12:06:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:21.930 12:06:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1943046 00:05:21.930 12:06:50 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 1943046 ']' 00:05:21.930 12:06:50 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 1943046 00:05:21.930 12:06:50 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:05:21.930 12:06:50 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:21.930 12:06:50 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1943046 00:05:21.930 12:06:50 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:21.930 12:06:50 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:21.930 12:06:50 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1943046' 00:05:21.930 killing process with pid 1943046 00:05:21.930 12:06:50 alias_rpc -- common/autotest_common.sh@966 -- # kill 1943046 00:05:21.930 12:06:50 alias_rpc -- common/autotest_common.sh@971 -- # wait 1943046 
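Two shutdown idioms appear in the traces above: json_config_extra_key sends SIGINT and then polls with kill -0 until the target exits, while alias_rpc's killprocess sends a plain kill and waits on the pid. A condensed sketch of the polling variant (variable name illustrative; the 30 x 0.5 s budget mirrors the trace):

    kill -SIGINT "$tgt_pid"
    for _ in $(seq 1 30); do
        # kill -0 only probes for existence; success means the target is still running.
        kill -0 "$tgt_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done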
00:05:22.189 00:05:22.189 real 0m1.522s 00:05:22.189 user 0m1.630s 00:05:22.189 sys 0m0.427s 00:05:22.189 12:06:50 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:22.189 12:06:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.189 ************************************ 00:05:22.189 END TEST alias_rpc 00:05:22.189 ************************************ 00:05:22.189 12:06:50 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:22.189 12:06:50 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:22.189 12:06:50 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:22.189 12:06:50 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:22.189 12:06:50 -- common/autotest_common.sh@10 -- # set +x 00:05:22.448 ************************************ 00:05:22.448 START TEST spdkcli_tcp 00:05:22.448 ************************************ 00:05:22.448 12:06:50 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:22.448 * Looking for test storage... 00:05:22.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:22.448 12:06:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:22.448 12:06:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:22.448 12:06:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:22.448 12:06:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:22.448 12:06:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:22.448 12:06:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:22.448 12:06:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:22.448 12:06:50 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:22.448 12:06:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.448 12:06:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1943372 00:05:22.448 12:06:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:22.448 12:06:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1943372 00:05:22.448 12:06:50 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 1943372 ']' 00:05:22.448 12:06:50 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.448 12:06:50 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:22.448 12:06:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.448 12:06:50 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:22.448 12:06:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.448 [2024-05-15 12:06:50.930261] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:05:22.448 [2024-05-15 12:06:50.930309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1943372 ] 00:05:22.448 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.707 [2024-05-15 12:06:50.999155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.707 [2024-05-15 12:06:51.074756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.707 [2024-05-15 12:06:51.074759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.276 12:06:51 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:23.276 12:06:51 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:05:23.276 12:06:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1943631 00:05:23.276 12:06:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:23.276 12:06:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:23.537 [ 00:05:23.537 "bdev_malloc_delete", 00:05:23.537 "bdev_malloc_create", 00:05:23.537 "bdev_null_resize", 00:05:23.537 "bdev_null_delete", 00:05:23.537 "bdev_null_create", 00:05:23.537 "bdev_nvme_cuse_unregister", 00:05:23.537 "bdev_nvme_cuse_register", 00:05:23.537 "bdev_opal_new_user", 00:05:23.537 "bdev_opal_set_lock_state", 00:05:23.537 "bdev_opal_delete", 00:05:23.537 "bdev_opal_get_info", 00:05:23.537 "bdev_opal_create", 00:05:23.537 "bdev_nvme_opal_revert", 00:05:23.537 "bdev_nvme_opal_init", 00:05:23.537 "bdev_nvme_send_cmd", 00:05:23.537 "bdev_nvme_get_path_iostat", 00:05:23.537 "bdev_nvme_get_mdns_discovery_info", 00:05:23.537 "bdev_nvme_stop_mdns_discovery", 00:05:23.537 "bdev_nvme_start_mdns_discovery", 00:05:23.537 "bdev_nvme_set_multipath_policy", 00:05:23.537 "bdev_nvme_set_preferred_path", 00:05:23.537 "bdev_nvme_get_io_paths", 00:05:23.537 "bdev_nvme_remove_error_injection", 00:05:23.537 "bdev_nvme_add_error_injection", 00:05:23.537 "bdev_nvme_get_discovery_info", 00:05:23.537 "bdev_nvme_stop_discovery", 00:05:23.537 "bdev_nvme_start_discovery", 00:05:23.537 "bdev_nvme_get_controller_health_info", 00:05:23.537 "bdev_nvme_disable_controller", 00:05:23.537 "bdev_nvme_enable_controller", 00:05:23.537 "bdev_nvme_reset_controller", 00:05:23.537 "bdev_nvme_get_transport_statistics", 00:05:23.537 "bdev_nvme_apply_firmware", 00:05:23.537 "bdev_nvme_detach_controller", 00:05:23.537 "bdev_nvme_get_controllers", 00:05:23.537 "bdev_nvme_attach_controller", 00:05:23.537 "bdev_nvme_set_hotplug", 00:05:23.537 "bdev_nvme_set_options", 00:05:23.537 "bdev_passthru_delete", 00:05:23.537 "bdev_passthru_create", 00:05:23.537 "bdev_lvol_check_shallow_copy", 00:05:23.537 "bdev_lvol_start_shallow_copy", 00:05:23.537 "bdev_lvol_grow_lvstore", 00:05:23.537 "bdev_lvol_get_lvols", 00:05:23.537 "bdev_lvol_get_lvstores", 00:05:23.537 "bdev_lvol_delete", 00:05:23.537 "bdev_lvol_set_read_only", 00:05:23.537 "bdev_lvol_resize", 00:05:23.537 "bdev_lvol_decouple_parent", 00:05:23.537 "bdev_lvol_inflate", 00:05:23.537 "bdev_lvol_rename", 00:05:23.537 "bdev_lvol_clone_bdev", 00:05:23.537 "bdev_lvol_clone", 00:05:23.537 "bdev_lvol_snapshot", 00:05:23.537 "bdev_lvol_create", 00:05:23.537 "bdev_lvol_delete_lvstore", 00:05:23.537 "bdev_lvol_rename_lvstore", 00:05:23.537 "bdev_lvol_create_lvstore", 00:05:23.537 "bdev_raid_set_options", 
00:05:23.537 "bdev_raid_remove_base_bdev", 00:05:23.537 "bdev_raid_add_base_bdev", 00:05:23.537 "bdev_raid_delete", 00:05:23.537 "bdev_raid_create", 00:05:23.537 "bdev_raid_get_bdevs", 00:05:23.537 "bdev_error_inject_error", 00:05:23.537 "bdev_error_delete", 00:05:23.537 "bdev_error_create", 00:05:23.538 "bdev_split_delete", 00:05:23.538 "bdev_split_create", 00:05:23.538 "bdev_delay_delete", 00:05:23.538 "bdev_delay_create", 00:05:23.538 "bdev_delay_update_latency", 00:05:23.538 "bdev_zone_block_delete", 00:05:23.538 "bdev_zone_block_create", 00:05:23.538 "blobfs_create", 00:05:23.538 "blobfs_detect", 00:05:23.538 "blobfs_set_cache_size", 00:05:23.538 "bdev_aio_delete", 00:05:23.538 "bdev_aio_rescan", 00:05:23.538 "bdev_aio_create", 00:05:23.538 "bdev_ftl_set_property", 00:05:23.538 "bdev_ftl_get_properties", 00:05:23.538 "bdev_ftl_get_stats", 00:05:23.538 "bdev_ftl_unmap", 00:05:23.538 "bdev_ftl_unload", 00:05:23.538 "bdev_ftl_delete", 00:05:23.538 "bdev_ftl_load", 00:05:23.538 "bdev_ftl_create", 00:05:23.538 "bdev_virtio_attach_controller", 00:05:23.538 "bdev_virtio_scsi_get_devices", 00:05:23.538 "bdev_virtio_detach_controller", 00:05:23.538 "bdev_virtio_blk_set_hotplug", 00:05:23.538 "bdev_iscsi_delete", 00:05:23.538 "bdev_iscsi_create", 00:05:23.538 "bdev_iscsi_set_options", 00:05:23.538 "accel_error_inject_error", 00:05:23.538 "ioat_scan_accel_module", 00:05:23.538 "dsa_scan_accel_module", 00:05:23.538 "iaa_scan_accel_module", 00:05:23.538 "vfu_virtio_create_scsi_endpoint", 00:05:23.538 "vfu_virtio_scsi_remove_target", 00:05:23.538 "vfu_virtio_scsi_add_target", 00:05:23.538 "vfu_virtio_create_blk_endpoint", 00:05:23.538 "vfu_virtio_delete_endpoint", 00:05:23.538 "keyring_file_remove_key", 00:05:23.538 "keyring_file_add_key", 00:05:23.538 "iscsi_get_histogram", 00:05:23.538 "iscsi_enable_histogram", 00:05:23.538 "iscsi_set_options", 00:05:23.538 "iscsi_get_auth_groups", 00:05:23.538 "iscsi_auth_group_remove_secret", 00:05:23.538 "iscsi_auth_group_add_secret", 00:05:23.538 "iscsi_delete_auth_group", 00:05:23.538 "iscsi_create_auth_group", 00:05:23.538 "iscsi_set_discovery_auth", 00:05:23.538 "iscsi_get_options", 00:05:23.538 "iscsi_target_node_request_logout", 00:05:23.538 "iscsi_target_node_set_redirect", 00:05:23.538 "iscsi_target_node_set_auth", 00:05:23.538 "iscsi_target_node_add_lun", 00:05:23.538 "iscsi_get_stats", 00:05:23.538 "iscsi_get_connections", 00:05:23.538 "iscsi_portal_group_set_auth", 00:05:23.538 "iscsi_start_portal_group", 00:05:23.538 "iscsi_delete_portal_group", 00:05:23.538 "iscsi_create_portal_group", 00:05:23.538 "iscsi_get_portal_groups", 00:05:23.538 "iscsi_delete_target_node", 00:05:23.538 "iscsi_target_node_remove_pg_ig_maps", 00:05:23.538 "iscsi_target_node_add_pg_ig_maps", 00:05:23.538 "iscsi_create_target_node", 00:05:23.538 "iscsi_get_target_nodes", 00:05:23.538 "iscsi_delete_initiator_group", 00:05:23.538 "iscsi_initiator_group_remove_initiators", 00:05:23.538 "iscsi_initiator_group_add_initiators", 00:05:23.538 "iscsi_create_initiator_group", 00:05:23.538 "iscsi_get_initiator_groups", 00:05:23.538 "nvmf_set_crdt", 00:05:23.538 "nvmf_set_config", 00:05:23.538 "nvmf_set_max_subsystems", 00:05:23.538 "nvmf_stop_mdns_prr", 00:05:23.538 "nvmf_publish_mdns_prr", 00:05:23.538 "nvmf_subsystem_get_listeners", 00:05:23.538 "nvmf_subsystem_get_qpairs", 00:05:23.538 "nvmf_subsystem_get_controllers", 00:05:23.538 "nvmf_get_stats", 00:05:23.538 "nvmf_get_transports", 00:05:23.538 "nvmf_create_transport", 00:05:23.538 "nvmf_get_targets", 00:05:23.538 
"nvmf_delete_target", 00:05:23.538 "nvmf_create_target", 00:05:23.538 "nvmf_subsystem_allow_any_host", 00:05:23.538 "nvmf_subsystem_remove_host", 00:05:23.538 "nvmf_subsystem_add_host", 00:05:23.538 "nvmf_ns_remove_host", 00:05:23.538 "nvmf_ns_add_host", 00:05:23.538 "nvmf_subsystem_remove_ns", 00:05:23.538 "nvmf_subsystem_add_ns", 00:05:23.538 "nvmf_subsystem_listener_set_ana_state", 00:05:23.538 "nvmf_discovery_get_referrals", 00:05:23.538 "nvmf_discovery_remove_referral", 00:05:23.538 "nvmf_discovery_add_referral", 00:05:23.538 "nvmf_subsystem_remove_listener", 00:05:23.538 "nvmf_subsystem_add_listener", 00:05:23.538 "nvmf_delete_subsystem", 00:05:23.538 "nvmf_create_subsystem", 00:05:23.538 "nvmf_get_subsystems", 00:05:23.538 "env_dpdk_get_mem_stats", 00:05:23.538 "nbd_get_disks", 00:05:23.538 "nbd_stop_disk", 00:05:23.538 "nbd_start_disk", 00:05:23.538 "ublk_recover_disk", 00:05:23.538 "ublk_get_disks", 00:05:23.538 "ublk_stop_disk", 00:05:23.538 "ublk_start_disk", 00:05:23.538 "ublk_destroy_target", 00:05:23.538 "ublk_create_target", 00:05:23.538 "virtio_blk_create_transport", 00:05:23.538 "virtio_blk_get_transports", 00:05:23.538 "vhost_controller_set_coalescing", 00:05:23.538 "vhost_get_controllers", 00:05:23.538 "vhost_delete_controller", 00:05:23.538 "vhost_create_blk_controller", 00:05:23.538 "vhost_scsi_controller_remove_target", 00:05:23.538 "vhost_scsi_controller_add_target", 00:05:23.538 "vhost_start_scsi_controller", 00:05:23.538 "vhost_create_scsi_controller", 00:05:23.538 "thread_set_cpumask", 00:05:23.538 "framework_get_scheduler", 00:05:23.538 "framework_set_scheduler", 00:05:23.538 "framework_get_reactors", 00:05:23.538 "thread_get_io_channels", 00:05:23.538 "thread_get_pollers", 00:05:23.538 "thread_get_stats", 00:05:23.538 "framework_monitor_context_switch", 00:05:23.538 "spdk_kill_instance", 00:05:23.538 "log_enable_timestamps", 00:05:23.538 "log_get_flags", 00:05:23.538 "log_clear_flag", 00:05:23.538 "log_set_flag", 00:05:23.538 "log_get_level", 00:05:23.538 "log_set_level", 00:05:23.538 "log_get_print_level", 00:05:23.538 "log_set_print_level", 00:05:23.538 "framework_enable_cpumask_locks", 00:05:23.538 "framework_disable_cpumask_locks", 00:05:23.538 "framework_wait_init", 00:05:23.538 "framework_start_init", 00:05:23.538 "scsi_get_devices", 00:05:23.538 "bdev_get_histogram", 00:05:23.538 "bdev_enable_histogram", 00:05:23.538 "bdev_set_qos_limit", 00:05:23.538 "bdev_set_qd_sampling_period", 00:05:23.538 "bdev_get_bdevs", 00:05:23.538 "bdev_reset_iostat", 00:05:23.538 "bdev_get_iostat", 00:05:23.538 "bdev_examine", 00:05:23.538 "bdev_wait_for_examine", 00:05:23.538 "bdev_set_options", 00:05:23.538 "notify_get_notifications", 00:05:23.538 "notify_get_types", 00:05:23.538 "accel_get_stats", 00:05:23.538 "accel_set_options", 00:05:23.538 "accel_set_driver", 00:05:23.538 "accel_crypto_key_destroy", 00:05:23.538 "accel_crypto_keys_get", 00:05:23.538 "accel_crypto_key_create", 00:05:23.538 "accel_assign_opc", 00:05:23.538 "accel_get_module_info", 00:05:23.538 "accel_get_opc_assignments", 00:05:23.538 "vmd_rescan", 00:05:23.538 "vmd_remove_device", 00:05:23.538 "vmd_enable", 00:05:23.538 "sock_get_default_impl", 00:05:23.538 "sock_set_default_impl", 00:05:23.538 "sock_impl_set_options", 00:05:23.538 "sock_impl_get_options", 00:05:23.538 "iobuf_get_stats", 00:05:23.538 "iobuf_set_options", 00:05:23.538 "keyring_get_keys", 00:05:23.538 "framework_get_pci_devices", 00:05:23.538 "framework_get_config", 00:05:23.538 "framework_get_subsystems", 00:05:23.538 
"vfu_tgt_set_base_path", 00:05:23.538 "trace_get_info", 00:05:23.538 "trace_get_tpoint_group_mask", 00:05:23.538 "trace_disable_tpoint_group", 00:05:23.538 "trace_enable_tpoint_group", 00:05:23.538 "trace_clear_tpoint_mask", 00:05:23.538 "trace_set_tpoint_mask", 00:05:23.538 "spdk_get_version", 00:05:23.538 "rpc_get_methods" 00:05:23.538 ] 00:05:23.538 12:06:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:23.538 12:06:51 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:23.538 12:06:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.538 12:06:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:23.538 12:06:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1943372 00:05:23.538 12:06:51 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 1943372 ']' 00:05:23.538 12:06:51 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 1943372 00:05:23.538 12:06:51 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:05:23.538 12:06:51 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:23.538 12:06:51 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1943372 00:05:23.538 12:06:51 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:23.538 12:06:51 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:23.538 12:06:51 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1943372' 00:05:23.538 killing process with pid 1943372 00:05:23.538 12:06:51 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 1943372 00:05:23.538 12:06:51 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 1943372 00:05:23.798 00:05:23.798 real 0m1.546s 00:05:23.798 user 0m2.777s 00:05:23.798 sys 0m0.488s 00:05:23.798 12:06:52 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:23.798 12:06:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.799 ************************************ 00:05:23.799 END TEST spdkcli_tcp 00:05:23.799 ************************************ 00:05:24.058 12:06:52 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.058 12:06:52 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:24.058 12:06:52 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:24.058 12:06:52 -- common/autotest_common.sh@10 -- # set +x 00:05:24.058 ************************************ 00:05:24.058 START TEST dpdk_mem_utility 00:05:24.058 ************************************ 00:05:24.058 12:06:52 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.058 * Looking for test storage... 
00:05:24.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:24.058 12:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:24.058 12:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1943706 00:05:24.058 12:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.058 12:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1943706 00:05:24.058 12:06:52 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 1943706 ']' 00:05:24.058 12:06:52 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.058 12:06:52 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:24.058 12:06:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.058 12:06:52 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:24.058 12:06:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.058 [2024-05-15 12:06:52.553934] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:05:24.059 [2024-05-15 12:06:52.553985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1943706 ] 00:05:24.059 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.319 [2024-05-15 12:06:52.625394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.319 [2024-05-15 12:06:52.698975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.887 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:24.887 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:05:24.887 12:06:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:24.887 12:06:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:24.887 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.887 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.887 { 00:05:24.887 "filename": "/tmp/spdk_mem_dump.txt" 00:05:24.887 } 00:05:24.887 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.887 12:06:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:24.887 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:24.887 1 heaps totaling size 814.000000 MiB 00:05:24.887 size: 814.000000 MiB heap id: 0 00:05:24.887 end heaps---------- 00:05:24.887 8 mempools totaling size 598.116089 MiB 00:05:24.887 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:24.887 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:24.887 size: 84.521057 MiB name: bdev_io_1943706 00:05:24.887 size: 51.011292 MiB name: evtpool_1943706 00:05:24.887 size: 50.003479 MiB name: 
msgpool_1943706 00:05:24.887 size: 21.763794 MiB name: PDU_Pool 00:05:24.887 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:24.887 size: 0.026123 MiB name: Session_Pool 00:05:24.887 end mempools------- 00:05:24.887 6 memzones totaling size 4.142822 MiB 00:05:24.887 size: 1.000366 MiB name: RG_ring_0_1943706 00:05:24.887 size: 1.000366 MiB name: RG_ring_1_1943706 00:05:24.887 size: 1.000366 MiB name: RG_ring_4_1943706 00:05:24.887 size: 1.000366 MiB name: RG_ring_5_1943706 00:05:24.887 size: 0.125366 MiB name: RG_ring_2_1943706 00:05:24.887 size: 0.015991 MiB name: RG_ring_3_1943706 00:05:24.887 end memzones------- 00:05:24.887 12:06:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:25.147 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:25.147 list of free elements. size: 12.519348 MiB 00:05:25.147 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:25.147 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:25.147 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:25.147 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:25.147 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:25.147 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:25.147 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:25.147 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:25.147 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:25.147 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:25.147 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:25.147 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:25.147 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:25.147 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:25.147 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:25.147 list of standard malloc elements. 
size: 199.218079 MiB 00:05:25.147 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:25.147 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:25.147 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:25.147 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:25.147 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:25.147 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:25.147 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:25.147 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:25.147 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:25.147 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:25.147 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:25.147 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:25.147 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:25.147 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:25.147 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:25.147 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:25.147 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:25.147 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:25.147 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:25.147 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:25.147 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:25.147 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:25.147 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:25.147 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:25.147 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:25.147 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:25.147 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:25.147 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:25.147 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:25.147 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:25.147 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:25.147 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:25.147 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:25.147 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:25.147 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:25.147 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:25.147 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:25.147 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:25.147 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:25.147 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:25.147 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:25.147 list of memzone associated elements. 
size: 602.262573 MiB 00:05:25.147 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:25.147 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:25.147 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:25.147 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:25.147 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:25.147 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1943706_0 00:05:25.147 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:25.147 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1943706_0 00:05:25.147 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:25.147 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1943706_0 00:05:25.147 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:25.147 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:25.147 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:25.147 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:25.147 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:25.147 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1943706 00:05:25.147 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:25.147 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1943706 00:05:25.147 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:25.147 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1943706 00:05:25.147 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:25.147 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:25.147 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:25.147 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:25.147 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:25.147 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:25.147 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:25.147 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:25.147 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:25.147 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1943706 00:05:25.147 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:25.147 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1943706 00:05:25.147 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:25.147 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1943706 00:05:25.147 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:25.147 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1943706 00:05:25.147 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:25.147 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1943706 00:05:25.147 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:25.147 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:25.147 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:25.147 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:25.147 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:25.147 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:25.147 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:25.147 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1943706 00:05:25.147 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:25.147 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:25.147 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:25.147 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:25.148 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:25.148 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1943706 00:05:25.148 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:25.148 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:25.148 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:25.148 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1943706 00:05:25.148 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:25.148 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1943706 00:05:25.148 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:25.148 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:25.148 12:06:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:25.148 12:06:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1943706 00:05:25.148 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 1943706 ']' 00:05:25.148 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 1943706 00:05:25.148 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:05:25.148 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:25.148 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1943706 00:05:25.148 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:25.148 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:25.148 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1943706' 00:05:25.148 killing process with pid 1943706 00:05:25.148 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 1943706 00:05:25.148 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 1943706 00:05:25.407 00:05:25.407 real 0m1.449s 00:05:25.407 user 0m1.485s 00:05:25.407 sys 0m0.445s 00:05:25.407 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:25.407 12:06:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.407 ************************************ 00:05:25.407 END TEST dpdk_mem_utility 00:05:25.407 ************************************ 00:05:25.407 12:06:53 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:25.407 12:06:53 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:25.407 12:06:53 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:25.407 12:06:53 -- common/autotest_common.sh@10 -- # set +x 00:05:25.407 ************************************ 00:05:25.407 START TEST event 00:05:25.407 ************************************ 00:05:25.407 12:06:53 event -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:25.667 * Looking for test storage... 
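The dpdk_mem_utility run above is a two-step flow: the env_dpdk_get_mem_stats RPC makes the running target write its allocator state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then summarizes that dump offline (the heap, mempool, and memzone tables above are its output). A sketch of the same flow against an already-running target:

    # Ask the target to dump its DPDK memory state; it reports the dump file name.
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps, mempools, and memzones from /tmp/spdk_mem_dump.txt.
    ./scripts/dpdk_mem_info.py
    # Second pass with -m 0, mirroring the trace above.
    ./scripts/dpdk_mem_info.py -m 0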
00:05:25.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:25.667 12:06:54 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:25.667 12:06:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:25.667 12:06:54 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.667 12:06:54 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:25.667 12:06:54 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:25.667 12:06:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.667 ************************************ 00:05:25.667 START TEST event_perf 00:05:25.667 ************************************ 00:05:25.667 12:06:54 event.event_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.667 Running I/O for 1 seconds...[2024-05-15 12:06:54.077034] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:05:25.667 [2024-05-15 12:06:54.077094] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944039 ] 00:05:25.667 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.667 [2024-05-15 12:06:54.150818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.926 [2024-05-15 12:06:54.224977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.926 [2024-05-15 12:06:54.225072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.926 [2024-05-15 12:06:54.225157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.926 [2024-05-15 12:06:54.225161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.933 Running I/O for 1 seconds... 00:05:26.933 lcore 0: 201811 00:05:26.933 lcore 1: 201810 00:05:26.933 lcore 2: 201810 00:05:26.933 lcore 3: 201811 00:05:26.933 done. 00:05:26.933 00:05:26.933 real 0m1.255s 00:05:26.933 user 0m4.158s 00:05:26.933 sys 0m0.093s 00:05:26.933 12:06:55 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:26.933 12:06:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.933 ************************************ 00:05:26.933 END TEST event_perf 00:05:26.933 ************************************ 00:05:26.933 12:06:55 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:26.933 12:06:55 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:05:26.933 12:06:55 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:26.933 12:06:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.933 ************************************ 00:05:26.933 START TEST event_reactor 00:05:26.933 ************************************ 00:05:26.933 12:06:55 event.event_reactor -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:26.933 [2024-05-15 12:06:55.428953] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:05:26.933 [2024-05-15 12:06:55.429044] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944326 ] 00:05:27.192 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.192 [2024-05-15 12:06:55.503850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.192 [2024-05-15 12:06:55.574087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.131 test_start 00:05:28.131 oneshot 00:05:28.131 tick 100 00:05:28.131 tick 100 00:05:28.131 tick 250 00:05:28.131 tick 100 00:05:28.131 tick 100 00:05:28.131 tick 250 00:05:28.131 tick 100 00:05:28.131 tick 500 00:05:28.131 tick 100 00:05:28.131 tick 100 00:05:28.131 tick 250 00:05:28.131 tick 100 00:05:28.131 tick 100 00:05:28.131 test_end 00:05:28.131 00:05:28.131 real 0m1.251s 00:05:28.131 user 0m1.156s 00:05:28.131 sys 0m0.089s 00:05:28.131 12:06:56 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:28.131 12:06:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:28.131 ************************************ 00:05:28.131 END TEST event_reactor 00:05:28.131 ************************************ 00:05:28.391 12:06:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.391 12:06:56 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:05:28.391 12:06:56 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:28.391 12:06:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.391 ************************************ 00:05:28.391 START TEST event_reactor_perf 00:05:28.391 ************************************ 00:05:28.391 12:06:56 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.391 [2024-05-15 12:06:56.774171] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:05:28.391 [2024-05-15 12:06:56.774268] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944608 ] 00:05:28.391 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.391 [2024-05-15 12:06:56.849635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.391 [2024-05-15 12:06:56.919745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.771 test_start 00:05:29.771 test_end 00:05:29.771 Performance: 515958 events per second 00:05:29.771 00:05:29.771 real 0m1.255s 00:05:29.771 user 0m1.160s 00:05:29.771 sys 0m0.091s 00:05:29.771 12:06:58 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:29.771 12:06:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.771 ************************************ 00:05:29.771 END TEST event_reactor_perf 00:05:29.771 ************************************ 00:05:29.771 12:06:58 event -- event/event.sh@49 -- # uname -s 00:05:29.771 12:06:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:29.771 12:06:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:29.771 12:06:58 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:29.771 12:06:58 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:29.771 12:06:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.771 ************************************ 00:05:29.771 START TEST event_scheduler 00:05:29.771 ************************************ 00:05:29.771 12:06:58 event.event_scheduler -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:29.771 * Looking for test storage... 00:05:29.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:29.771 12:06:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:29.771 12:06:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1944920 00:05:29.771 12:06:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.771 12:06:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1944920 00:05:29.771 12:06:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:29.771 12:06:58 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 1944920 ']' 00:05:29.771 12:06:58 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.771 12:06:58 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:29.771 12:06:58 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.771 12:06:58 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:29.771 12:06:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.771 [2024-05-15 12:06:58.243630] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:05:29.771 [2024-05-15 12:06:58.243675] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944920 ] 00:05:29.771 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.031 [2024-05-15 12:06:58.309558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.031 [2024-05-15 12:06:58.387289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.031 [2024-05-15 12:06:58.387372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.031 [2024-05-15 12:06:58.387456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.031 [2024-05-15 12:06:58.387458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.600 12:06:59 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:30.600 12:06:59 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:05:30.600 12:06:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:30.600 12:06:59 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.600 12:06:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.600 POWER: Env isn't set yet! 00:05:30.600 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:30.600 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.600 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.600 POWER: Attempting to initialise PSTAT power management... 00:05:30.600 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:30.600 POWER: Initialized successfully for lcore 0 power management 00:05:30.600 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:30.600 POWER: Initialized successfully for lcore 1 power management 00:05:30.600 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:30.600 POWER: Initialized successfully for lcore 2 power management 00:05:30.600 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:30.600 POWER: Initialized successfully for lcore 3 power management 00:05:30.600 12:06:59 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.600 12:06:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:30.600 12:06:59 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.600 12:06:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 [2024-05-15 12:06:59.192269] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:30.860 12:06:59 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.860 12:06:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:30.860 12:06:59 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:30.860 12:06:59 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:30.860 12:06:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 ************************************ 00:05:30.860 START TEST scheduler_create_thread 00:05:30.860 ************************************ 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 2 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 3 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 4 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 5 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 6 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 7 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 8 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 9 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 10 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:30.860 12:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.798 12:07:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:31.798 12:07:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:31.798 12:07:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:31.798 12:07:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:31.798 12:07:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.736 12:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:32.736 12:07:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:32.736 12:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:32.736 12:07:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.674 12:07:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.674 12:07:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:33.674 12:07:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:33.674 12:07:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.674 12:07:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.613 12:07:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.613 00:05:34.613 real 0m3.563s 00:05:34.613 user 0m0.025s 00:05:34.613 sys 0m0.006s 00:05:34.613 12:07:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:34.613 12:07:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.613 ************************************ 00:05:34.613 END TEST scheduler_create_thread 00:05:34.613 ************************************ 00:05:34.613 12:07:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:34.613 12:07:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1944920 00:05:34.613 12:07:02 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 1944920 ']' 00:05:34.613 12:07:02 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 1944920 00:05:34.613 12:07:02 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 00:05:34.613 12:07:02 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:34.613 12:07:02 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1944920 00:05:34.613 12:07:02 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:05:34.613 12:07:02 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:05:34.613 12:07:02 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1944920' 00:05:34.613 killing process with pid 1944920 00:05:34.613 12:07:02 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 1944920 00:05:34.613 12:07:02 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 1944920 00:05:34.872 [2024-05-15 12:07:03.180748] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
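The scheduler run above relies on SPDK's deferred-init flow: the app is started with --wait-for-rpc, the dynamic scheduler is selected over RPC, and only then does framework_start_init let initialization proceed (both RPCs appear in the trace and in the rpc_get_methods listing earlier). A bare-bones sketch of that sequence against a generic spdk_tgt rather than the test's scheduler binary (default socket path assumed; wait for the socket with a poll loop like the one sketched earlier before issuing RPCs):

    # Start in deferred-init mode so pre-init RPCs can be applied first.
    ./build/bin/spdk_tgt -m 0xF --wait-for-rpc &
    # (wait until /var/tmp/spdk.sock answers, e.g. by polling rpc_get_methods)
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init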
00:05:34.872 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:34.872 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:34.872 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:34.872 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:34.872 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:34.872 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:34.872 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:34.872 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:35.130 00:05:35.130 real 0m5.349s 00:05:35.130 user 0m11.215s 00:05:35.130 sys 0m0.422s 00:05:35.130 12:07:03 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:35.130 12:07:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.130 ************************************ 00:05:35.130 END TEST event_scheduler 00:05:35.130 ************************************ 00:05:35.130 12:07:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:35.130 12:07:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:35.130 12:07:03 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:35.130 12:07:03 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:35.130 12:07:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.130 ************************************ 00:05:35.130 START TEST app_repeat 00:05:35.130 ************************************ 00:05:35.130 12:07:03 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1945856 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1945856' 00:05:35.130 Process app_repeat pid: 1945856 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:35.130 spdk_app_start Round 0 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1945856 /var/tmp/spdk-nbd.sock 00:05:35.130 12:07:03 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 1945856 ']' 00:05:35.130 12:07:03 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.130 12:07:03 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:35.130 12:07:03 
event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.130 12:07:03 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:35.130 12:07:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.130 12:07:03 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:35.130 [2024-05-15 12:07:03.567914] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:05:35.130 [2024-05-15 12:07:03.567970] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1945856 ] 00:05:35.130 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.130 [2024-05-15 12:07:03.636830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.389 [2024-05-15 12:07:03.713723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.389 [2024-05-15 12:07:03.713726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.958 12:07:04 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:35.958 12:07:04 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:35.958 12:07:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.217 Malloc0 00:05:36.217 12:07:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.218 Malloc1 00:05:36.218 12:07:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.218 12:07:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.477 /dev/nbd0 00:05:36.477 12:07:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.477 12:07:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.477 1+0 records in 00:05:36.477 1+0 records out 00:05:36.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260203 s, 15.7 MB/s 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:36.477 12:07:04 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:36.477 12:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.477 12:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.477 12:07:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.737 /dev/nbd1 00:05:36.737 12:07:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.737 12:07:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.737 1+0 records in 00:05:36.737 1+0 records out 00:05:36.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000256933 s, 15.9 MB/s 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:36.737 12:07:05 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:36.737 12:07:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.737 12:07:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.737 12:07:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.737 12:07:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.737 12:07:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.996 { 00:05:36.996 "nbd_device": "/dev/nbd0", 00:05:36.996 "bdev_name": "Malloc0" 00:05:36.996 }, 00:05:36.996 { 00:05:36.996 "nbd_device": "/dev/nbd1", 00:05:36.996 "bdev_name": "Malloc1" 00:05:36.996 } 00:05:36.996 ]' 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.996 { 00:05:36.996 "nbd_device": "/dev/nbd0", 00:05:36.996 "bdev_name": "Malloc0" 00:05:36.996 }, 00:05:36.996 { 00:05:36.996 "nbd_device": "/dev/nbd1", 00:05:36.996 "bdev_name": "Malloc1" 00:05:36.996 } 00:05:36.996 ]' 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.996 /dev/nbd1' 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.996 /dev/nbd1' 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.996 256+0 records in 00:05:36.996 256+0 records out 00:05:36.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112085 s, 93.6 MB/s 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.996 256+0 records in 00:05:36.996 256+0 records out 00:05:36.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196167 s, 53.5 MB/s 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.996 256+0 records in 00:05:36.996 256+0 records out 00:05:36.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179335 s, 58.5 MB/s 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.996 12:07:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.997 12:07:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.997 12:07:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.997 12:07:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.997 12:07:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.997 12:07:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.997 12:07:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.997 12:07:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.997 12:07:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.997 12:07:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.997 12:07:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.256 12:07:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.256 12:07:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.256 12:07:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.256 12:07:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.256 12:07:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.256 12:07:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.256 12:07:05 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:05:37.256 12:07:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.256 12:07:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.256 12:07:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.515 12:07:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.515 12:07:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.515 12:07:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.515 12:07:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.515 12:07:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.515 12:07:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.515 12:07:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.515 12:07:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.515 12:07:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.515 12:07:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.515 12:07:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.515 12:07:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.515 12:07:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.515 12:07:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.775 12:07:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.775 12:07:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.775 12:07:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.775 12:07:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.775 12:07:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.775 12:07:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.775 12:07:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.775 12:07:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.775 12:07:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.775 12:07:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.775 12:07:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.035 [2024-05-15 12:07:06.478103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.035 [2024-05-15 12:07:06.543662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.035 [2024-05-15 12:07:06.543665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.295 [2024-05-15 12:07:06.585239] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.295 [2024-05-15 12:07:06.585280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
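
Each app_repeat round traced above follows the same pattern; condensed into a sketch for orientation (the rpc.py path is shortened and the temp file name is illustrative, but every command corresponds to one visible in the trace):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $rpc bdev_malloc_create 64 4096        # creates Malloc0
    $rpc bdev_malloc_create 64 4096        # creates Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$nbd"    # read back and verify what was written
    done
    rm nbdrandtest

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM        # ends this round; the app is restarted for the next one
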
00:05:40.830 12:07:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.830 12:07:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:40.830 spdk_app_start Round 1 00:05:40.830 12:07:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1945856 /var/tmp/spdk-nbd.sock 00:05:40.830 12:07:09 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 1945856 ']' 00:05:40.830 12:07:09 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.830 12:07:09 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:40.830 12:07:09 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.830 12:07:09 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:40.830 12:07:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.091 12:07:09 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:41.091 12:07:09 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:41.091 12:07:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.091 Malloc0 00:05:41.412 12:07:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.412 Malloc1 00:05:41.412 12:07:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.412 12:07:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.672 /dev/nbd0 00:05:41.672 12:07:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.672 12:07:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:41.672 12:07:09 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:41.672 12:07:09 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:41.672 12:07:09 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:41.672 12:07:09 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:41.672 12:07:09 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:41.672 12:07:09 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:41.672 12:07:09 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:41.672 12:07:09 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:41.672 12:07:09 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.672 1+0 records in 00:05:41.672 1+0 records out 00:05:41.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252264 s, 16.2 MB/s 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:41.672 12:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.672 12:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.672 12:07:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.672 /dev/nbd1 00:05:41.672 12:07:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.672 12:07:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:41.672 12:07:10 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.932 1+0 records in 00:05:41.932 1+0 records out 00:05:41.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255677 s, 16.0 MB/s 00:05:41.932 12:07:10 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.932 12:07:10 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:41.932 12:07:10 event.app_repeat -- 
common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.932 12:07:10 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:41.932 12:07:10 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.932 { 00:05:41.932 "nbd_device": "/dev/nbd0", 00:05:41.932 "bdev_name": "Malloc0" 00:05:41.932 }, 00:05:41.932 { 00:05:41.932 "nbd_device": "/dev/nbd1", 00:05:41.932 "bdev_name": "Malloc1" 00:05:41.932 } 00:05:41.932 ]' 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.932 { 00:05:41.932 "nbd_device": "/dev/nbd0", 00:05:41.932 "bdev_name": "Malloc0" 00:05:41.932 }, 00:05:41.932 { 00:05:41.932 "nbd_device": "/dev/nbd1", 00:05:41.932 "bdev_name": "Malloc1" 00:05:41.932 } 00:05:41.932 ]' 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.932 /dev/nbd1' 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.932 /dev/nbd1' 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.932 256+0 records in 00:05:41.932 256+0 records out 00:05:41.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00341873 s, 307 MB/s 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.932 12:07:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.191 256+0 records in 00:05:42.191 256+0 records out 00:05:42.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0162497 s, 64.5 MB/s 00:05:42.191 12:07:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.191 12:07:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.191 256+0 records in 00:05:42.192 256+0 records out 00:05:42.192 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207872 s, 50.4 MB/s 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.192 12:07:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.451 12:07:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.451 12:07:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.451 12:07:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.451 12:07:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.451 12:07:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.451 12:07:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.451 12:07:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.451 12:07:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.451 12:07:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.452 12:07:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.452 12:07:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.710 12:07:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.710 12:07:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.710 12:07:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.710 12:07:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.710 12:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.710 12:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.710 12:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.710 12:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.710 12:07:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.710 12:07:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.710 12:07:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.710 12:07:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.711 12:07:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.970 12:07:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:43.230 [2024-05-15 12:07:11.511583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.230 [2024-05-15 12:07:11.575400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.230 [2024-05-15 12:07:11.575403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.230 [2024-05-15 12:07:11.617643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.230 [2024-05-15 12:07:11.617685] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
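
The repeated grep/dd blocks in the trace come from the waitfornbd helper, which gates the data path on the kernel actually exposing the device. A reduced form of what the trace shows (the retry pacing and temp-file location are assumptions; the real helper in autotest_common.sh also retries the dd read itself):

    waitfornbd() {
        local nbd_name=$1 i
        # Wait for the device node to be listed by the kernel.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off between attempts
        done
        # Confirm a direct 4 KiB read works and produces data.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }
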
00:05:46.523 12:07:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:46.523 12:07:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:46.523 spdk_app_start Round 2 00:05:46.523 12:07:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1945856 /var/tmp/spdk-nbd.sock 00:05:46.523 12:07:14 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 1945856 ']' 00:05:46.523 12:07:14 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.523 12:07:14 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:46.523 12:07:14 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.523 12:07:14 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:46.523 12:07:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.523 12:07:14 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:46.523 12:07:14 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:46.523 12:07:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.523 Malloc0 00:05:46.523 12:07:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.523 Malloc1 00:05:46.523 12:07:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.523 12:07:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.523 /dev/nbd0 00:05:46.523 12:07:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.523 12:07:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:46.523 12:07:15 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:46.523 12:07:15 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:46.523 12:07:15 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:46.523 12:07:15 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:46.523 12:07:15 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:46.523 12:07:15 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:46.523 12:07:15 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:46.523 12:07:15 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:46.523 12:07:15 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.523 1+0 records in 00:05:46.523 1+0 records out 00:05:46.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249851 s, 16.4 MB/s 00:05:46.523 12:07:15 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.523 12:07:15 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:46.523 12:07:15 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:46.782 12:07:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.782 12:07:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.782 12:07:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.782 /dev/nbd1 00:05:46.782 12:07:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.782 12:07:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.782 1+0 records in 00:05:46.782 1+0 records out 00:05:46.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220435 s, 18.6 MB/s 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:46.782 12:07:15 event.app_repeat -- 
common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:46.782 12:07:15 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:46.782 12:07:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.782 12:07:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.782 12:07:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.782 12:07:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.782 12:07:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.040 12:07:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.040 { 00:05:47.040 "nbd_device": "/dev/nbd0", 00:05:47.040 "bdev_name": "Malloc0" 00:05:47.040 }, 00:05:47.040 { 00:05:47.040 "nbd_device": "/dev/nbd1", 00:05:47.040 "bdev_name": "Malloc1" 00:05:47.040 } 00:05:47.040 ]' 00:05:47.040 12:07:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.040 { 00:05:47.040 "nbd_device": "/dev/nbd0", 00:05:47.040 "bdev_name": "Malloc0" 00:05:47.040 }, 00:05:47.040 { 00:05:47.040 "nbd_device": "/dev/nbd1", 00:05:47.040 "bdev_name": "Malloc1" 00:05:47.040 } 00:05:47.040 ]' 00:05:47.040 12:07:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.040 12:07:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.040 /dev/nbd1' 00:05:47.040 12:07:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.040 /dev/nbd1' 00:05:47.040 12:07:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.040 12:07:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.041 256+0 records in 00:05:47.041 256+0 records out 00:05:47.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112164 s, 93.5 MB/s 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.041 256+0 records in 00:05:47.041 256+0 records out 00:05:47.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0198681 s, 52.8 MB/s 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.041 256+0 records in 00:05:47.041 256+0 records out 00:05:47.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180463 s, 58.1 MB/s 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.041 12:07:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.300 12:07:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.300 12:07:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.300 12:07:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.300 12:07:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.300 12:07:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.300 12:07:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.300 12:07:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.300 12:07:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.300 12:07:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.300 12:07:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.559 12:07:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.559 12:07:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.559 12:07:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.559 12:07:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.559 12:07:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.559 12:07:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.559 12:07:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.559 12:07:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.559 12:07:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.559 12:07:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.559 12:07:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.818 12:07:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.818 12:07:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.818 12:07:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.818 12:07:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.818 12:07:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.818 12:07:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.818 12:07:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.818 12:07:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.818 12:07:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.818 12:07:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.818 12:07:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.818 12:07:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.818 12:07:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.078 12:07:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.078 [2024-05-15 12:07:16.579118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.337 [2024-05-15 12:07:16.642680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.337 [2024-05-15 12:07:16.642684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.337 [2024-05-15 12:07:16.684092] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.337 [2024-05-15 12:07:16.684133] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
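
The nbd_get_count checks interleaved above decide whether the expected number of NBD devices is attached: nbd_get_disks returns JSON, jq pulls out the device nodes, and grep -c counts them (2 right after start, 0 once both disks are stopped). A sketch built from the calls in the trace, with the helper name chosen here for illustration:

    count_nbd() {
        local json
        json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
        # grep -c prints 0 on no match but exits non-zero, hence the || true.
        echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
    }

    [ "$(count_nbd)" -eq 2 ]   # after nbd_start_disk for Malloc0 and Malloc1
    [ "$(count_nbd)" -eq 0 ]   # after both nbd_stop_disk calls
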
00:05:50.871 12:07:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1945856 /var/tmp/spdk-nbd.sock 00:05:50.871 12:07:19 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 1945856 ']' 00:05:50.871 12:07:19 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.871 12:07:19 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:50.871 12:07:19 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.871 12:07:19 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:50.871 12:07:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.131 12:07:19 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:51.131 12:07:19 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:51.131 12:07:19 event.app_repeat -- event/event.sh@39 -- # killprocess 1945856 00:05:51.131 12:07:19 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 1945856 ']' 00:05:51.131 12:07:19 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 1945856 00:05:51.131 12:07:19 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:05:51.131 12:07:19 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:51.131 12:07:19 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1945856 00:05:51.131 12:07:19 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:51.131 12:07:19 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:51.131 12:07:19 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1945856' 00:05:51.131 killing process with pid 1945856 00:05:51.131 12:07:19 event.app_repeat -- common/autotest_common.sh@966 -- # kill 1945856 00:05:51.131 12:07:19 event.app_repeat -- common/autotest_common.sh@971 -- # wait 1945856 00:05:51.391 spdk_app_start is called in Round 0. 00:05:51.391 Shutdown signal received, stop current app iteration 00:05:51.391 Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 reinitialization... 00:05:51.391 spdk_app_start is called in Round 1. 00:05:51.391 Shutdown signal received, stop current app iteration 00:05:51.391 Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 reinitialization... 00:05:51.391 spdk_app_start is called in Round 2. 00:05:51.391 Shutdown signal received, stop current app iteration 00:05:51.391 Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 reinitialization... 00:05:51.391 spdk_app_start is called in Round 3. 
00:05:51.391 Shutdown signal received, stop current app iteration 00:05:51.391 12:07:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:51.391 12:07:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:51.391 00:05:51.391 real 0m16.248s 00:05:51.391 user 0m34.482s 00:05:51.391 sys 0m2.984s 00:05:51.391 12:07:19 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:51.391 12:07:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.391 ************************************ 00:05:51.391 END TEST app_repeat 00:05:51.391 ************************************ 00:05:51.391 12:07:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:51.391 12:07:19 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:51.391 12:07:19 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:51.391 12:07:19 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:51.391 12:07:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.391 ************************************ 00:05:51.391 START TEST cpu_locks 00:05:51.391 ************************************ 00:05:51.391 12:07:19 event.cpu_locks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:51.651 * Looking for test storage... 00:05:51.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:51.651 12:07:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.651 12:07:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.651 12:07:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.651 12:07:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.651 12:07:19 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:51.651 12:07:19 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:51.651 12:07:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.651 ************************************ 00:05:51.651 START TEST default_locks 00:05:51.651 ************************************ 00:05:51.651 12:07:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:05:51.651 12:07:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1948934 00:05:51.651 12:07:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1948934 00:05:51.651 12:07:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.651 12:07:20 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 1948934 ']' 00:05:51.651 12:07:20 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.651 12:07:20 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:51.651 12:07:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
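default_locks, which starts here, launches a single spdk_tgt pinned to core 0 (-m 0x1) and then checks that the core lock is visible from outside the process: as the following trace shows, the locks_exist helper runs lslocks against the target pid and greps for a lock on an spdk_cpu_lock_* file. A condensed sketch of that setup and check, with the full workspace path to spdk_tgt shortened and the waitforlisten step reduced to a comment:

    # Start a target that claims core 0 and verify the claim from outside.
    build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    # (the real test waits for /var/tmp/spdk.sock with waitforlisten before checking)

    # locks_exist: the claimed core shows up in lslocks as a lock held by the
    # target on a /var/tmp/spdk_cpu_lock_* file.
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock held by $tgt_pid"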
00:05:51.651 12:07:20 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:51.651 12:07:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.651 [2024-05-15 12:07:20.068156] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:05:51.651 [2024-05-15 12:07:20.068220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1948934 ] 00:05:51.651 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.651 [2024-05-15 12:07:20.139534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.911 [2024-05-15 12:07:20.211515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.479 12:07:20 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:52.479 12:07:20 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:05:52.479 12:07:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1948934 00:05:52.480 12:07:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1948934 00:05:52.480 12:07:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.047 lslocks: write error 00:05:53.047 12:07:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1948934 00:05:53.047 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 1948934 ']' 00:05:53.047 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 1948934 00:05:53.047 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:05:53.047 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:53.047 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1948934 00:05:53.047 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:53.047 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:53.047 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1948934' 00:05:53.047 killing process with pid 1948934 00:05:53.047 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 1948934 00:05:53.047 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 1948934 00:05:53.614 12:07:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1948934 00:05:53.614 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1948934 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 1948934 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 1948934 ']' 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (1948934) - No such process 00:05:53.615 ERROR: process (pid: 1948934) is no longer running 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.615 00:05:53.615 real 0m1.897s 00:05:53.615 user 0m1.983s 00:05:53.615 sys 0m0.714s 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:53.615 12:07:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.615 ************************************ 00:05:53.615 END TEST default_locks 00:05:53.615 ************************************ 00:05:53.615 12:07:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:53.615 12:07:21 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:53.615 12:07:21 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:53.615 12:07:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.615 ************************************ 00:05:53.615 START TEST default_locks_via_rpc 00:05:53.615 ************************************ 00:05:53.615 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:05:53.615 12:07:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1949366 00:05:53.615 12:07:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1949366 00:05:53.615 12:07:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.615 12:07:22 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1949366 ']' 00:05:53.615 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.615 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:53.615 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.615 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:53.615 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.615 [2024-05-15 12:07:22.057218] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:05:53.615 [2024-05-15 12:07:22.057263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1949366 ] 00:05:53.615 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.615 [2024-05-15 12:07:22.125219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.874 [2024-05-15 12:07:22.200106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1949366 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1949366 00:05:54.443 12:07:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.011 12:07:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1949366 00:05:55.011 12:07:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 1949366 ']' 00:05:55.011 12:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 1949366 00:05:55.011 12:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:05:55.011 12:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:55.011 12:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1949366 00:05:55.011 12:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:55.011 12:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:55.011 12:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1949366' 00:05:55.011 killing process with pid 1949366 00:05:55.011 12:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 1949366 00:05:55.012 12:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 1949366 00:05:55.271 00:05:55.272 real 0m1.656s 00:05:55.272 user 0m1.732s 00:05:55.272 sys 0m0.555s 00:05:55.272 12:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:55.272 12:07:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.272 ************************************ 00:05:55.272 END TEST default_locks_via_rpc 00:05:55.272 ************************************ 00:05:55.272 12:07:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:55.272 12:07:23 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:55.272 12:07:23 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:55.272 12:07:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.272 ************************************ 00:05:55.272 START TEST non_locking_app_on_locked_coremask 00:05:55.272 ************************************ 00:05:55.272 12:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:05:55.272 12:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1949765 00:05:55.272 12:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1949765 /var/tmp/spdk.sock 00:05:55.272 12:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.272 12:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1949765 ']' 00:05:55.272 12:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.272 12:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:55.272 12:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
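The default_locks_via_rpc run that finished above exercises the same lock from the RPC side: framework_disable_cpumask_locks drops the core locks on a running target (the traced no_locks check then finds no /var/tmp/spdk_cpu_lock_* files), and framework_enable_cpumask_locks takes them again so locks_exist passes. A sketch of that toggle using rpc.py directly, reusing tgt_pid from the sketch above, with the socket given explicitly and nullglob set here as an assumption so an empty glob expands to nothing:

    # Toggle core locks on a live target over JSON-RPC, as the traced rpc_cmd calls do.
    shopt -s nullglob

    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 )) && echo "no lock files while locks are disabled"

    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"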
00:05:55.272 12:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:55.272 12:07:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.531 [2024-05-15 12:07:23.803597] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:05:55.531 [2024-05-15 12:07:23.803646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1949765 ] 00:05:55.531 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.531 [2024-05-15 12:07:23.872466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.531 [2024-05-15 12:07:23.939689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.178 12:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:56.178 12:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:56.178 12:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1949807 00:05:56.178 12:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1949807 /var/tmp/spdk2.sock 00:05:56.178 12:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:56.178 12:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1949807 ']' 00:05:56.178 12:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.178 12:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:56.178 12:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.178 12:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:56.178 12:07:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.178 [2024-05-15 12:07:24.630618] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:05:56.178 [2024-05-15 12:07:24.630672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1949807 ] 00:05:56.178 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.464 [2024-05-15 12:07:24.730576] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
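The "CPU core locks deactivated." notice just above comes from the second spdk_tgt in non_locking_app_on_locked_coremask: it targets the same core 0 as the first instance but passes --disable-cpumask-locks, so it skips the lock and is allowed to start. The two launch commands from the trace, side by side with paths shortened:

    # First instance claims the core 0 lock; the second opts out of locking,
    # runs on its own RPC socket, and must still come up.
    build/bin/spdk_tgt -m 0x1 &
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &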
00:05:56.464 [2024-05-15 12:07:24.730604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.464 [2024-05-15 12:07:24.869764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.032 12:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:57.032 12:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:57.032 12:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1949765 00:05:57.032 12:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1949765 00:05:57.032 12:07:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.410 lslocks: write error 00:05:58.410 12:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1949765 00:05:58.410 12:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1949765 ']' 00:05:58.410 12:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 1949765 00:05:58.410 12:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:58.410 12:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:58.410 12:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1949765 00:05:58.410 12:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:58.410 12:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:58.410 12:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1949765' 00:05:58.410 killing process with pid 1949765 00:05:58.410 12:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 1949765 00:05:58.410 12:07:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 1949765 00:05:58.979 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1949807 00:05:58.979 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1949807 ']' 00:05:58.979 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 1949807 00:05:58.979 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:58.979 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:58.979 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1949807 00:05:58.979 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:58.979 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:58.979 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1949807' 00:05:58.979 
killing process with pid 1949807 00:05:58.979 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 1949807 00:05:58.979 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 1949807 00:05:59.239 00:05:59.239 real 0m3.880s 00:05:59.239 user 0m4.124s 00:05:59.239 sys 0m1.305s 00:05:59.239 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:59.239 12:07:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.239 ************************************ 00:05:59.239 END TEST non_locking_app_on_locked_coremask 00:05:59.239 ************************************ 00:05:59.239 12:07:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:59.239 12:07:27 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:59.239 12:07:27 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:59.239 12:07:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.239 ************************************ 00:05:59.239 START TEST locking_app_on_unlocked_coremask 00:05:59.239 ************************************ 00:05:59.239 12:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:05:59.239 12:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1950374 00:05:59.239 12:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1950374 /var/tmp/spdk.sock 00:05:59.239 12:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1950374 ']' 00:05:59.239 12:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.239 12:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:59.239 12:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.239 12:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:59.239 12:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.239 12:07:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:59.239 [2024-05-15 12:07:27.759350] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:05:59.239 [2024-05-15 12:07:27.759392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950374 ] 00:05:59.498 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.498 [2024-05-15 12:07:27.827755] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
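locking_app_on_unlocked_coremask, starting here, flips the previous scenario: the first target opts out of locking with --disable-cpumask-locks (hence the "CPU core locks deactivated." notice above), so a second, lock-taking target on the same core must start successfully and end up owning the core lock; the locks_exist check against the second pid further down in the trace confirms it. A sketch built from the two command lines in the trace, with the waitforlisten step reduced to a comment:

    # First target leaves core 0 unlocked; the second takes the lock.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!
    # (after waiting for /var/tmp/spdk2.sock, as the real test does)
    lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "core lock held by second instance"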
00:05:59.498 [2024-05-15 12:07:27.827776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.498 [2024-05-15 12:07:27.901235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.066 12:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:00.066 12:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:00.066 12:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1950632 00:06:00.066 12:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1950632 /var/tmp/spdk2.sock 00:06:00.066 12:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.067 12:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1950632 ']' 00:06:00.067 12:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.067 12:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:00.067 12:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.067 12:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:00.067 12:07:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.067 [2024-05-15 12:07:28.570662] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:06:00.067 [2024-05-15 12:07:28.570717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950632 ] 00:06:00.325 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.325 [2024-05-15 12:07:28.664815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.325 [2024-05-15 12:07:28.809094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.893 12:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:00.893 12:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:00.893 12:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1950632 00:06:00.893 12:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1950632 00:06:00.893 12:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.461 lslocks: write error 00:06:01.461 12:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1950374 00:06:01.461 12:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1950374 ']' 00:06:01.461 12:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 1950374 00:06:01.461 12:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:01.461 12:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:01.461 12:07:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1950374 00:06:01.720 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:01.720 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:01.721 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1950374' 00:06:01.721 killing process with pid 1950374 00:06:01.721 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 1950374 00:06:01.721 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 1950374 00:06:02.289 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1950632 00:06:02.289 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1950632 ']' 00:06:02.289 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 1950632 00:06:02.289 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:02.289 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:02.289 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1950632 00:06:02.289 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 
00:06:02.289 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:02.289 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1950632' 00:06:02.289 killing process with pid 1950632 00:06:02.289 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 1950632 00:06:02.289 12:07:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 1950632 00:06:02.548 00:06:02.548 real 0m3.369s 00:06:02.548 user 0m3.570s 00:06:02.548 sys 0m1.024s 00:06:02.808 12:07:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:02.808 12:07:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.808 ************************************ 00:06:02.808 END TEST locking_app_on_unlocked_coremask 00:06:02.808 ************************************ 00:06:02.808 12:07:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:02.808 12:07:31 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:02.808 12:07:31 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:02.808 12:07:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.808 ************************************ 00:06:02.808 START TEST locking_app_on_locked_coremask 00:06:02.808 ************************************ 00:06:02.808 12:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:06:02.808 12:07:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1951043 00:06:02.808 12:07:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1951043 /var/tmp/spdk.sock 00:06:02.808 12:07:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.808 12:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1951043 ']' 00:06:02.808 12:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.808 12:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:02.808 12:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.808 12:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:02.808 12:07:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.808 [2024-05-15 12:07:31.228254] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
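locking_app_on_locked_coremask, whose first target is starting here, is the negative case: once core 0 is locked, a second target launched without --disable-cpumask-locks must fail to come up, which is why the trace below wraps waitforlisten in the suite's NOT helper and why the "Cannot create lock on core 0 ... exiting" errors appear further down. A sketch of that expectation, using plain if ! in place of NOT and the suite's waitforlisten helper as seen in the trace:

    # Second target on an already-locked core, without --disable-cpumask-locks:
    # startup is expected to abort with "Unable to acquire lock on assigned core mask".
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!
    if ! waitforlisten "$pid2" /var/tmp/spdk2.sock; then
        echo "second instance failed to start, as the test requires"
    fi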
00:06:02.808 [2024-05-15 12:07:31.228305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1951043 ] 00:06:02.808 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.808 [2024-05-15 12:07:31.299114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.067 [2024-05-15 12:07:31.368414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1951215 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1951215 /var/tmp/spdk2.sock 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1951215 /var/tmp/spdk2.sock 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1951215 /var/tmp/spdk2.sock 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1951215 ']' 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:03.635 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.635 [2024-05-15 12:07:32.068504] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:06:03.635 [2024-05-15 12:07:32.068553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1951215 ] 00:06:03.635 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.635 [2024-05-15 12:07:32.162351] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1951043 has claimed it. 00:06:03.635 [2024-05-15 12:07:32.162392] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:04.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (1951215) - No such process 00:06:04.204 ERROR: process (pid: 1951215) is no longer running 00:06:04.204 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:04.204 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:06:04.204 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:04.204 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:04.204 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:04.204 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:04.204 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1951043 00:06:04.204 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1951043 00:06:04.204 12:07:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.771 lslocks: write error 00:06:04.771 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1951043 00:06:04.771 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1951043 ']' 00:06:04.771 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 1951043 00:06:04.771 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:04.771 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:04.771 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1951043 00:06:05.030 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:05.030 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:05.030 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1951043' 00:06:05.030 killing process with pid 1951043 00:06:05.030 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 1951043 00:06:05.030 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 1951043 00:06:05.290 00:06:05.290 real 0m2.464s 00:06:05.290 user 0m2.660s 00:06:05.290 sys 0m0.758s 00:06:05.290 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:06:05.290 12:07:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.290 ************************************ 00:06:05.290 END TEST locking_app_on_locked_coremask 00:06:05.290 ************************************ 00:06:05.290 12:07:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:05.290 12:07:33 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:05.290 12:07:33 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:05.290 12:07:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.290 ************************************ 00:06:05.290 START TEST locking_overlapped_coremask 00:06:05.290 ************************************ 00:06:05.290 12:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:06:05.290 12:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1951515 00:06:05.290 12:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1951515 /var/tmp/spdk.sock 00:06:05.290 12:07:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:05.290 12:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 1951515 ']' 00:06:05.290 12:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.290 12:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:05.290 12:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.290 12:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:05.290 12:07:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.290 [2024-05-15 12:07:33.778671] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:06:05.290 [2024-05-15 12:07:33.778716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1951515 ] 00:06:05.290 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.550 [2024-05-15 12:07:33.847812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.550 [2024-05-15 12:07:33.917968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.550 [2024-05-15 12:07:33.918062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.550 [2024-05-15 12:07:33.918064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1951773 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1951773 /var/tmp/spdk2.sock 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1951773 /var/tmp/spdk2.sock 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1951773 /var/tmp/spdk2.sock 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 1951773 ']' 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:06.121 12:07:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.121 [2024-05-15 12:07:34.617715] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:06:06.121 [2024-05-15 12:07:34.617765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1951773 ] 00:06:06.121 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.380 [2024-05-15 12:07:34.717686] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1951515 has claimed it. 00:06:06.380 [2024-05-15 12:07:34.717728] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (1951773) - No such process 00:06:06.950 ERROR: process (pid: 1951773) is no longer running 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1951515 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 1951515 ']' 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 1951515 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1951515 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1951515' 00:06:06.950 killing process with pid 1951515 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 
1951515 00:06:06.950 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # wait 1951515 00:06:07.209 00:06:07.209 real 0m1.909s 00:06:07.209 user 0m5.288s 00:06:07.209 sys 0m0.462s 00:06:07.209 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:07.209 12:07:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.209 ************************************ 00:06:07.209 END TEST locking_overlapped_coremask 00:06:07.209 ************************************ 00:06:07.209 12:07:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.209 12:07:35 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:07.209 12:07:35 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:07.209 12:07:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.209 ************************************ 00:06:07.209 START TEST locking_overlapped_coremask_via_rpc 00:06:07.209 ************************************ 00:06:07.209 12:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:06:07.209 12:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1951908 00:06:07.209 12:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1951908 /var/tmp/spdk.sock 00:06:07.209 12:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.209 12:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1951908 ']' 00:06:07.209 12:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.209 12:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:07.209 12:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.209 12:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:07.209 12:07:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.469 [2024-05-15 12:07:35.778093] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:07.469 [2024-05-15 12:07:35.778140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1951908 ] 00:06:07.469 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.469 [2024-05-15 12:07:35.847676] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
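check_remaining_locks, traced a few entries above at the end of locking_overlapped_coremask, verifies that the failed 0x1c overlap did not disturb the locks of the surviving 0x7 target: the lock files on disk must be exactly the three that mask 0x7 (cores 0-2) creates. The comparison, as in the traced helper:

    # Lock files left on disk must match exactly what the 0x7 target created.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "core locks 000-002 intact"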
00:06:07.469 [2024-05-15 12:07:35.847697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.469 [2024-05-15 12:07:35.918304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.469 [2024-05-15 12:07:35.918399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.469 [2024-05-15 12:07:35.918401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.038 12:07:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:08.038 12:07:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:08.038 12:07:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:08.298 12:07:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1952083 00:06:08.298 12:07:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1952083 /var/tmp/spdk2.sock 00:06:08.298 12:07:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1952083 ']' 00:06:08.298 12:07:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.298 12:07:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:08.298 12:07:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.298 12:07:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:08.298 12:07:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.298 [2024-05-15 12:07:36.621060] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:08.298 [2024-05-15 12:07:36.621112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1952083 ] 00:06:08.298 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.298 [2024-05-15 12:07:36.717731] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
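In this final test both targets start with --disable-cpumask-locks, which is why two masks that share core 2 can run side by side: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, matching the reactor notices above and below. The contention is introduced only later, over RPC. The two launch commands from the trace, with paths shortened:

    # Overlapping masks, but neither instance takes core locks at startup.
    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                           # cores 0,1,2
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # cores 2,3,4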
00:06:08.298 [2024-05-15 12:07:36.717760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.558 [2024-05-15 12:07:36.861650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.558 [2024-05-15 12:07:36.861771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.558 [2024-05-15 12:07:36.861771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.127 [2024-05-15 12:07:37.424264] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1951908 has claimed it. 
00:06:09.127 request: 00:06:09.127 { 00:06:09.127 "method": "framework_enable_cpumask_locks", 00:06:09.127 "req_id": 1 00:06:09.127 } 00:06:09.127 Got JSON-RPC error response 00:06:09.127 response: 00:06:09.127 { 00:06:09.127 "code": -32603, 00:06:09.127 "message": "Failed to claim CPU core: 2" 00:06:09.127 } 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1951908 /var/tmp/spdk.sock 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1951908 ']' 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1952083 /var/tmp/spdk2.sock 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1952083 ']' 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
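Annotation: the JSON-RPC failure above is the expected negative result for this test. The first target (pid 1951908, mask 0x7) already enabled cpumask locks over RPC a few records earlier and claimed cores 0-2 (the lock files /var/tmp/spdk_cpu_lock_000 through _002 are checked just below), so when the second target on /var/tmp/spdk2.sock asks to enable locks it cannot claim the shared core 2 and gets error -32603. A condensed restatement of the two calls as the trace issues them, reusing the harness's rpc_cmd helper (illustrative sketch, not additional log output):

# rpc_cmd is the autotest helper seen in the trace above.
# First target, default socket /var/tmp/spdk.sock: succeeds, one lock file per claimed core.
rpc_cmd framework_enable_cpumask_locks

# Second target, /var/tmp/spdk2.sock: expected to fail because core 2 is already locked.
rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
  || echo "expected failure: core 2 already claimed by pid 1951908"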
00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:09.127 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.386 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:09.386 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:09.386 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:09.386 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.386 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.386 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.386 00:06:09.386 real 0m2.082s 00:06:09.386 user 0m0.822s 00:06:09.386 sys 0m0.193s 00:06:09.386 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:09.386 12:07:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.386 ************************************ 00:06:09.386 END TEST locking_overlapped_coremask_via_rpc 00:06:09.386 ************************************ 00:06:09.386 12:07:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:09.386 12:07:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1951908 ]] 00:06:09.386 12:07:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1951908 00:06:09.386 12:07:37 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 1951908 ']' 00:06:09.386 12:07:37 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 1951908 00:06:09.386 12:07:37 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:06:09.386 12:07:37 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:09.386 12:07:37 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1951908 00:06:09.386 12:07:37 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:09.386 12:07:37 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:09.386 12:07:37 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1951908' 00:06:09.386 killing process with pid 1951908 00:06:09.386 12:07:37 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 1951908 00:06:09.386 12:07:37 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 1951908 00:06:09.996 12:07:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1952083 ]] 00:06:09.996 12:07:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1952083 00:06:09.996 12:07:38 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 1952083 ']' 00:06:09.996 12:07:38 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 1952083 00:06:09.996 12:07:38 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:06:09.996 12:07:38 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' 
Linux = Linux ']' 00:06:09.996 12:07:38 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1952083 00:06:09.996 12:07:38 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:06:09.996 12:07:38 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:06:09.996 12:07:38 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1952083' 00:06:09.996 killing process with pid 1952083 00:06:09.996 12:07:38 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 1952083 00:06:09.996 12:07:38 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 1952083 00:06:10.257 12:07:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.257 12:07:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:10.257 12:07:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1951908 ]] 00:06:10.257 12:07:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1951908 00:06:10.257 12:07:38 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 1951908 ']' 00:06:10.257 12:07:38 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 1951908 00:06:10.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (1951908) - No such process 00:06:10.257 12:07:38 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 1951908 is not found' 00:06:10.257 Process with pid 1951908 is not found 00:06:10.257 12:07:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1952083 ]] 00:06:10.257 12:07:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1952083 00:06:10.257 12:07:38 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 1952083 ']' 00:06:10.257 12:07:38 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 1952083 00:06:10.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (1952083) - No such process 00:06:10.257 12:07:38 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 1952083 is not found' 00:06:10.257 Process with pid 1952083 is not found 00:06:10.257 12:07:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.257 00:06:10.257 real 0m18.785s 00:06:10.257 user 0m30.834s 00:06:10.257 sys 0m6.114s 00:06:10.257 12:07:38 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:10.257 12:07:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.257 ************************************ 00:06:10.257 END TEST cpu_locks 00:06:10.257 ************************************ 00:06:10.257 00:06:10.257 real 0m44.781s 00:06:10.257 user 1m23.214s 00:06:10.257 sys 0m10.233s 00:06:10.257 12:07:38 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:10.257 12:07:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.257 ************************************ 00:06:10.257 END TEST event 00:06:10.257 ************************************ 00:06:10.257 12:07:38 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:10.257 12:07:38 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:10.257 12:07:38 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:10.257 12:07:38 -- common/autotest_common.sh@10 -- # set +x 00:06:10.517 ************************************ 00:06:10.517 START TEST thread 00:06:10.517 ************************************ 00:06:10.517 12:07:38 thread -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:10.517 * Looking for test storage... 00:06:10.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:10.517 12:07:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.517 12:07:38 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:10.517 12:07:38 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:10.517 12:07:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.517 ************************************ 00:06:10.517 START TEST thread_poller_perf 00:06:10.517 ************************************ 00:06:10.517 12:07:38 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.517 [2024-05-15 12:07:38.976007] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:10.517 [2024-05-15 12:07:38.976085] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1952656 ] 00:06:10.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.777 [2024-05-15 12:07:39.050211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.777 [2024-05-15 12:07:39.119670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.777 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:11.716 ====================================== 00:06:11.716 busy:2508788938 (cyc) 00:06:11.716 total_run_count: 432000 00:06:11.716 tsc_hz: 2500000000 (cyc) 00:06:11.716 ====================================== 00:06:11.716 poller_cost: 5807 (cyc), 2322 (nsec) 00:06:11.716 00:06:11.716 real 0m1.262s 00:06:11.716 user 0m1.173s 00:06:11.716 sys 0m0.086s 00:06:11.716 12:07:40 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:11.716 12:07:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.716 ************************************ 00:06:11.716 END TEST thread_poller_perf 00:06:11.716 ************************************ 00:06:11.976 12:07:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.976 12:07:40 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:11.976 12:07:40 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:11.976 12:07:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.976 ************************************ 00:06:11.976 START TEST thread_poller_perf 00:06:11.976 ************************************ 00:06:11.976 12:07:40 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.976 [2024-05-15 12:07:40.327957] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:06:11.976 [2024-05-15 12:07:40.328039] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1952855 ] 00:06:11.976 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.976 [2024-05-15 12:07:40.402545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.976 [2024-05-15 12:07:40.472458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.976 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:13.356 ====================================== 00:06:13.356 busy:2501942162 (cyc) 00:06:13.356 total_run_count: 5649000 00:06:13.356 tsc_hz: 2500000000 (cyc) 00:06:13.356 ====================================== 00:06:13.356 poller_cost: 442 (cyc), 176 (nsec) 00:06:13.356 00:06:13.356 real 0m1.251s 00:06:13.356 user 0m1.156s 00:06:13.356 sys 0m0.091s 00:06:13.356 12:07:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:13.356 12:07:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.356 ************************************ 00:06:13.356 END TEST thread_poller_perf 00:06:13.356 ************************************ 00:06:13.356 12:07:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:13.356 00:06:13.356 real 0m2.805s 00:06:13.356 user 0m2.447s 00:06:13.356 sys 0m0.364s 00:06:13.356 12:07:41 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:13.356 12:07:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.356 ************************************ 00:06:13.356 END TEST thread 00:06:13.356 ************************************ 00:06:13.356 12:07:41 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:13.356 12:07:41 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:13.356 12:07:41 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:13.356 12:07:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.356 ************************************ 00:06:13.356 START TEST accel 00:06:13.356 ************************************ 00:06:13.356 12:07:41 accel -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:13.356 * Looking for test storage... 
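Annotation: the two poller_perf summaries above reduce to a simple ratio, poller_cost is busy cycles divided by total_run_count, converted to nanoseconds with the reported tsc_hz of 2500000000 (2.5 GHz). Re-deriving both results from the printed numbers (bash/awk sketch, not part of the harness):

# run 1 (-l 1): 2508788938 cyc / 432000 polls  -> 5807 cyc -> ~2322 ns
# run 2 (-l 0): 2501942162 cyc / 5649000 polls ->  442 cyc ->  ~176 ns
for run in "2508788938 432000" "2501942162 5649000"; do
  awk -v busy="${run% *}" -v count="${run#* }" -v hz=2500000000 'BEGIN {
    cyc = int(busy / count)
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz
  }'
done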
00:06:13.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:13.356 12:07:41 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:13.356 12:07:41 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:13.356 12:07:41 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.356 12:07:41 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1953151 00:06:13.356 12:07:41 accel -- accel/accel.sh@63 -- # waitforlisten 1953151 00:06:13.356 12:07:41 accel -- common/autotest_common.sh@828 -- # '[' -z 1953151 ']' 00:06:13.356 12:07:41 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.356 12:07:41 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:13.356 12:07:41 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:13.356 12:07:41 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:13.356 12:07:41 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.356 12:07:41 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.356 12:07:41 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:13.356 12:07:41 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.356 12:07:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.356 12:07:41 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.356 12:07:41 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.356 12:07:41 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.356 12:07:41 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:13.356 12:07:41 accel -- accel/accel.sh@41 -- # jq -r . 00:06:13.356 [2024-05-15 12:07:41.836168] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:13.356 [2024-05-15 12:07:41.836227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953151 ] 00:06:13.356 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.616 [2024-05-15 12:07:41.905244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.616 [2024-05-15 12:07:41.975742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.185 12:07:42 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:14.185 12:07:42 accel -- common/autotest_common.sh@861 -- # return 0 00:06:14.185 12:07:42 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:14.185 12:07:42 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:14.185 12:07:42 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:14.185 12:07:42 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:14.185 12:07:42 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:14.185 12:07:42 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:14.185 12:07:42 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:14.185 12:07:42 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:14.185 12:07:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.185 12:07:42 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:14.185 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.185 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.185 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.185 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.185 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.185 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.185 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.185 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.185 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.185 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.185 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.185 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.185 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.185 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.185 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.185 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.185 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.185 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.185 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.185 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.185 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.185 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.186 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.186 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.186 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.186 
12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.186 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.186 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.186 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.186 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.186 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.186 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.186 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.186 12:07:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.186 12:07:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.186 12:07:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.186 12:07:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.186 12:07:42 accel -- accel/accel.sh@75 -- # killprocess 1953151 00:06:14.186 12:07:42 accel -- common/autotest_common.sh@947 -- # '[' -z 1953151 ']' 00:06:14.186 12:07:42 accel -- common/autotest_common.sh@951 -- # kill -0 1953151 00:06:14.186 12:07:42 accel -- common/autotest_common.sh@952 -- # uname 00:06:14.186 12:07:42 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:14.186 12:07:42 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1953151 00:06:14.445 12:07:42 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:14.445 12:07:42 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:14.446 12:07:42 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1953151' 00:06:14.446 killing process with pid 1953151 00:06:14.446 12:07:42 accel -- common/autotest_common.sh@966 -- # kill 1953151 00:06:14.446 12:07:42 accel -- common/autotest_common.sh@971 -- # wait 1953151 00:06:14.706 12:07:43 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:14.706 12:07:43 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:14.706 12:07:43 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:14.706 12:07:43 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:14.706 12:07:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.706 12:07:43 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:06:14.706 12:07:43 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:14.706 12:07:43 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:14.706 12:07:43 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.706 12:07:43 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.706 12:07:43 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.706 12:07:43 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.706 12:07:43 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.706 12:07:43 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:14.706 12:07:43 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
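Annotation: the accel.sh trace above builds the expected_opcs table by asking the target for its opcode-to-module assignments over RPC and splitting each key=value pair; with no accel JSON config supplied, every opcode comes back mapped to the software module, as the repeated assignments show. A condensed sketch of that loop, reusing the rpc_cmd helper and the exact jq filter from the trace (the herestring form is an assumption, the trace only shows the individual steps):

declare -A expected_opcs
exp_opcs=($(rpc_cmd accel_get_opc_assignments \
            | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
for opc_opt in "${exp_opcs[@]}"; do
  IFS== read -r opc module <<< "$opc_opt"
  expected_opcs["$opc"]=$module    # in this run every opcode maps to "software"
done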
00:06:14.706 12:07:43 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:14.706 12:07:43 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:14.706 12:07:43 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:14.706 12:07:43 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:14.706 12:07:43 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:14.706 12:07:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.706 ************************************ 00:06:14.706 START TEST accel_missing_filename 00:06:14.706 ************************************ 00:06:14.706 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:06:14.706 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:14.706 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:14.706 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:14.965 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.965 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:14.965 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.965 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:14.965 12:07:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:14.965 12:07:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:14.965 12:07:43 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.965 12:07:43 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.965 12:07:43 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.965 12:07:43 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.965 12:07:43 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.965 12:07:43 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:14.965 12:07:43 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:14.965 [2024-05-15 12:07:43.266049] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:14.965 [2024-05-15 12:07:43.266112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953426 ] 00:06:14.965 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.965 [2024-05-15 12:07:43.340848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.965 [2024-05-15 12:07:43.413160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.965 [2024-05-15 12:07:43.453758] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.225 [2024-05-15 12:07:43.512609] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:15.225 A filename is required. 
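Annotation: the "A filename is required." error above is the intended negative case, since the compress workload needs an uncompressed input file passed with -l and this invocation deliberately leaves it out. A passing counterpart of the same command, assuming the test/accel/bib input file the next test uses and omitting the -c JSON config the harness normally injects (sketch only):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -l names the uncompressed input file for the compress workload (see the -l entry in the option list printed further below).
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress \
  -l "$SPDK_DIR/test/accel/bib"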
00:06:15.225 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:15.225 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.225 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:15.225 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:15.225 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:15.225 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.225 00:06:15.225 real 0m0.369s 00:06:15.225 user 0m0.264s 00:06:15.225 sys 0m0.144s 00:06:15.225 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:15.225 12:07:43 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:15.225 ************************************ 00:06:15.225 END TEST accel_missing_filename 00:06:15.225 ************************************ 00:06:15.225 12:07:43 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.225 12:07:43 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:06:15.225 12:07:43 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:15.225 12:07:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.225 ************************************ 00:06:15.225 START TEST accel_compress_verify 00:06:15.225 ************************************ 00:06:15.225 12:07:43 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.225 12:07:43 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:15.225 12:07:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.225 12:07:43 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:15.225 12:07:43 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.225 12:07:43 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:15.225 12:07:43 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.225 12:07:43 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.225 12:07:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.225 12:07:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:15.225 12:07:43 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.225 12:07:43 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.225 12:07:43 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.225 12:07:43 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.225 12:07:43 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.225 
12:07:43 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:15.225 12:07:43 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:15.225 [2024-05-15 12:07:43.722769] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:15.225 [2024-05-15 12:07:43.722834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953647 ] 00:06:15.485 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.485 [2024-05-15 12:07:43.794498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.485 [2024-05-15 12:07:43.862911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.485 [2024-05-15 12:07:43.903859] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.485 [2024-05-15 12:07:43.964089] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:15.745 00:06:15.745 Compression does not support the verify option, aborting. 00:06:15.745 12:07:44 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:15.745 12:07:44 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.745 12:07:44 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:15.745 12:07:44 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:15.745 12:07:44 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:15.745 12:07:44 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.745 00:06:15.745 real 0m0.362s 00:06:15.745 user 0m0.263s 00:06:15.745 sys 0m0.136s 00:06:15.745 12:07:44 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:15.745 12:07:44 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:15.745 ************************************ 00:06:15.745 END TEST accel_compress_verify 00:06:15.745 ************************************ 00:06:15.745 12:07:44 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:15.745 12:07:44 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:15.745 12:07:44 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:15.745 12:07:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.745 ************************************ 00:06:15.745 START TEST accel_wrong_workload 00:06:15.745 ************************************ 00:06:15.745 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:06:15.745 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:15.745 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:15.745 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:15.745 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.745 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:15.745 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.745 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
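Annotation: the compress_verify failure captured above is also deliberate; the command does pass -l test/accel/bib but adds -y, accel_perf aborts with "Compression does not support the verify option", and the NOT wrapper treats that failure as the expected outcome. A simplified stand-in for that expected-failure idiom, not the actual NOT implementation from autotest_common.sh (sketch):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Assert that the command fails; succeeding here would be the real test failure.
if "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress -l "$SPDK_DIR/test/accel/bib" -y; then
  echo "unexpected success: compress accepted -y" >&2
  exit 1
fi
echo "failed as expected: verify is not supported for the compress workload"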
00:06:15.745 12:07:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:15.745 12:07:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:15.745 12:07:44 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.745 12:07:44 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.745 12:07:44 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.745 12:07:44 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.745 12:07:44 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.745 12:07:44 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:15.745 12:07:44 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:15.745 Unsupported workload type: foobar 00:06:15.746 [2024-05-15 12:07:44.173005] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:15.746 accel_perf options: 00:06:15.746 [-h help message] 00:06:15.746 [-q queue depth per core] 00:06:15.746 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:15.746 [-T number of threads per core 00:06:15.746 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:15.746 [-t time in seconds] 00:06:15.746 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:15.746 [ dif_verify, , dif_generate, dif_generate_copy 00:06:15.746 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:15.746 [-l for compress/decompress workloads, name of uncompressed input file 00:06:15.746 [-S for crc32c workload, use this seed value (default 0) 00:06:15.746 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:15.746 [-f for fill workload, use this BYTE value (default 255) 00:06:15.746 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:15.746 [-y verify result if this switch is on] 00:06:15.746 [-a tasks to allocate per core (default: same value as -q)] 00:06:15.746 Can be used to spread operations across a wider range of memory. 
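Annotation: the option list above is what the remaining accel tests assemble their command lines from. A representative invocation using only flags from that list, the same -t 1 -w crc32c -S 32 -y combination the accel_crc32c test below runs (the harness additionally supplies an accel JSON config via -c, omitted here as a simplification):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# 1-second crc32c run: -S 32 sets the crc32c seed, -y verifies each result.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w crc32c -S 32 -y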
00:06:15.746 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:15.746 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.746 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:15.746 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.746 00:06:15.746 real 0m0.036s 00:06:15.746 user 0m0.019s 00:06:15.746 sys 0m0.016s 00:06:15.746 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:15.746 12:07:44 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:15.746 ************************************ 00:06:15.746 END TEST accel_wrong_workload 00:06:15.746 ************************************ 00:06:15.746 Error: writing output failed: Broken pipe 00:06:15.746 12:07:44 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:15.746 12:07:44 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:06:15.746 12:07:44 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:15.746 12:07:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.746 ************************************ 00:06:15.746 START TEST accel_negative_buffers 00:06:15.746 ************************************ 00:06:15.746 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:15.746 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:15.746 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:15.746 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:15.746 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.746 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:15.746 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.746 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:15.746 12:07:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:15.746 12:07:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:15.746 12:07:44 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.746 12:07:44 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.746 12:07:44 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.746 12:07:44 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.746 12:07:44 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.746 12:07:44 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:15.746 12:07:44 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:16.007 -x option must be non-negative. 
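Annotation: the accel_negative_buffers case starting above hands -x -1 to the xor workload, which accel_perf rejects as non-negative-only, as the surrounding parser error and repeated option list show; the help entry for -x also notes that xor needs at least two source buffers. A valid counterpart of that command (sketch, config flag omitted as before):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# xor requires a non-negative -x, with 2 source buffers as the documented minimum.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 2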
00:06:16.007 [2024-05-15 12:07:44.296610] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:16.007 accel_perf options: 00:06:16.007 [-h help message] 00:06:16.007 [-q queue depth per core] 00:06:16.007 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:16.007 [-T number of threads per core 00:06:16.007 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:16.007 [-t time in seconds] 00:06:16.007 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:16.007 [ dif_verify, , dif_generate, dif_generate_copy 00:06:16.007 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:16.007 [-l for compress/decompress workloads, name of uncompressed input file 00:06:16.007 [-S for crc32c workload, use this seed value (default 0) 00:06:16.007 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:16.007 [-f for fill workload, use this BYTE value (default 255) 00:06:16.007 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:16.007 [-y verify result if this switch is on] 00:06:16.007 [-a tasks to allocate per core (default: same value as -q)] 00:06:16.007 Can be used to spread operations across a wider range of memory. 00:06:16.007 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:16.007 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:16.007 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:16.007 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:16.007 00:06:16.007 real 0m0.038s 00:06:16.007 user 0m0.016s 00:06:16.007 sys 0m0.022s 00:06:16.007 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:16.007 12:07:44 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:16.007 ************************************ 00:06:16.007 END TEST accel_negative_buffers 00:06:16.007 ************************************ 00:06:16.007 Error: writing output failed: Broken pipe 00:06:16.007 12:07:44 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:16.007 12:07:44 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:16.007 12:07:44 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:16.007 12:07:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.007 ************************************ 00:06:16.007 START TEST accel_crc32c 00:06:16.007 ************************************ 00:06:16.007 12:07:44 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:16.007 12:07:44 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:16.007 [2024-05-15 12:07:44.416509] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:16.007 [2024-05-15 12:07:44.416566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953718 ] 00:06:16.007 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.007 [2024-05-15 12:07:44.488279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.267 [2024-05-15 12:07:44.561216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.267 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.267 12:07:44 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:16.268 12:07:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.647 12:07:45 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:17.647 12:07:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.647 00:06:17.647 real 0m1.370s 00:06:17.647 user 0m1.249s 00:06:17.647 sys 0m0.135s 00:06:17.647 12:07:45 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:17.647 12:07:45 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:17.647 ************************************ 00:06:17.647 END TEST accel_crc32c 00:06:17.647 ************************************ 00:06:17.647 12:07:45 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:17.647 12:07:45 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:17.647 12:07:45 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:17.647 12:07:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.647 ************************************ 00:06:17.647 START TEST accel_crc32c_C2 00:06:17.647 ************************************ 00:06:17.647 12:07:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:17.647 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.647 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:17.648 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:17.648 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:17.648 12:07:45 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:06:17.648 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.648 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.648 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.648 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.648 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.648 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.648 12:07:45 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:17.648 [2024-05-15 12:07:45.873309] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:17.648 [2024-05-15 12:07:45.873362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953999 ] 00:06:17.648 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.648 [2024-05-15 12:07:45.944028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.648 [2024-05-15 12:07:46.012939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.648 12:07:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.027 12:07:47 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.027 00:06:19.027 real 0m1.364s 00:06:19.027 user 0m1.243s 00:06:19.027 sys 0m0.135s 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:19.027 12:07:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:19.027 ************************************ 00:06:19.027 END TEST accel_crc32c_C2 00:06:19.027 ************************************ 00:06:19.027 12:07:47 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:19.027 12:07:47 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:19.027 12:07:47 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:19.027 12:07:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.027 ************************************ 00:06:19.027 START TEST accel_copy 00:06:19.027 ************************************ 00:06:19.027 12:07:47 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:19.027 12:07:47 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:19.027 [2024-05-15 12:07:47.329243] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:19.027 [2024-05-15 12:07:47.329298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954289 ] 00:06:19.027 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.027 [2024-05-15 12:07:47.398860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.027 [2024-05-15 12:07:47.467517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:19.027 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.028 12:07:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
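The IFS=:, read -r var val, and case "$var" in steps that repeat throughout this trace are accel.sh replaying the option/value pairs captured for each accel_perf run. As an illustrative aside, a much-simplified sketch of that read/case pattern follows; the function name and the sample key:value input are hypothetical and not taken from accel.sh.

#!/usr/bin/env bash
# Illustrative sketch only -- a stripped-down version of the read/case pattern
# that produces the repeated "IFS=:", "read -r var val" and 'case "$var" in'
# trace lines above. Function name and sample input are hypothetical.
parse_accel_vals() {
    local accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;      # e.g. crc32c, copy, fill, dualcast
            module) accel_module=$val ;;   # e.g. software
            *)      ;;                     # ignore anything unrecognised
        esac
    done
    echo "opc=$accel_opc module=$accel_module"
}
# Hypothetical usage with made-up key:value pairs:
printf 'opc:crc32c\nmodule:software\n' | parse_accel_vals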
00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:20.401 12:07:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.401 00:06:20.401 real 0m1.366s 00:06:20.401 user 0m1.245s 00:06:20.401 sys 0m0.134s 00:06:20.401 12:07:48 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:20.401 12:07:48 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:20.401 ************************************ 00:06:20.401 END TEST accel_copy 00:06:20.401 ************************************ 00:06:20.401 12:07:48 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.401 12:07:48 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:20.401 12:07:48 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:20.401 12:07:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.401 ************************************ 00:06:20.401 START TEST accel_fill 00:06:20.401 ************************************ 00:06:20.401 12:07:48 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.401 12:07:48 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:20.401 12:07:48 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:20.401 [2024-05-15 12:07:48.785560] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:20.401 [2024-05-15 12:07:48.785616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954576 ] 00:06:20.401 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.401 [2024-05-15 12:07:48.855260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.401 [2024-05-15 12:07:48.923084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.659 12:07:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.595 12:07:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.596 12:07:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.596 12:07:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.596 12:07:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.596 12:07:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.596 12:07:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.596 12:07:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.596 12:07:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.596 12:07:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.596 12:07:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.596 12:07:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.596 12:07:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:21.596 12:07:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.596 00:06:21.596 real 0m1.369s 00:06:21.596 user 0m1.254s 00:06:21.596 sys 0m0.128s 00:06:21.596 12:07:50 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:21.596 12:07:50 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:21.596 ************************************ 00:06:21.596 END TEST accel_fill 00:06:21.596 ************************************ 00:06:21.855 12:07:50 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:21.855 12:07:50 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:21.855 12:07:50 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:21.855 12:07:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.855 ************************************ 00:06:21.855 START TEST accel_copy_crc32c 00:06:21.855 ************************************ 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:21.855 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
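The command lines recorded in this trace show how accel.sh drives each workload: it runs build/examples/accel_perf with the test's -t/-w/-y options plus a JSON accel config supplied on /dev/fd/62. As a hedged aside, the snippet below sketches rerunning a single workload by hand outside the harness; the SPDK path is the one seen in this workspace's log (adjust it for a local checkout), and the JSON config that build_accel_config writes to /dev/fd/62 is deliberately omitted here.

#!/usr/bin/env bash
# Hedged sketch: rerunning one accel_perf workload by hand, outside run_test.
# SPDK_DIR is the workspace path recorded in this log; point it at your own
# checkout. The JSON config normally passed via -c /dev/fd/62 is left out.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Same options the harness passes for this test
# (-t 1 -w copy_crc32c -y, as seen in the accel_perf command line above).
args=(-t 1 -w copy_crc32c -y)
"$SPDK_DIR/build/examples/accel_perf" "${args[@]}"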
00:06:21.855 [2024-05-15 12:07:50.227484] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:21.855 [2024-05-15 12:07:50.227552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954861 ] 00:06:21.855 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.855 [2024-05-15 12:07:50.298520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.855 [2024-05-15 12:07:50.368623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.113 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.114 12:07:50 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.114 12:07:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.049 00:06:23.049 real 0m1.367s 00:06:23.049 user 0m1.246s 00:06:23.049 sys 0m0.135s 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:23.049 12:07:51 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:23.049 ************************************ 00:06:23.049 END TEST accel_copy_crc32c 00:06:23.049 ************************************ 00:06:23.309 12:07:51 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:23.309 12:07:51 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:23.309 12:07:51 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:23.309 12:07:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.309 ************************************ 00:06:23.309 START TEST accel_copy_crc32c_C2 00:06:23.309 ************************************ 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.309 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:23.309 [2024-05-15 12:07:51.692285] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:23.309 [2024-05-15 12:07:51.692354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1955142 ] 00:06:23.309 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.309 [2024-05-15 12:07:51.762192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.309 [2024-05-15 12:07:51.831124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.568 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:23.569 12:07:51 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.569 12:07:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.534 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.534 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.534 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.534 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.535 00:06:24.535 real 0m1.363s 00:06:24.535 user 0m1.236s 00:06:24.535 sys 0m0.133s 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:24.535 12:07:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:24.535 ************************************ 00:06:24.535 END TEST accel_copy_crc32c_C2 00:06:24.535 ************************************ 00:06:24.831 12:07:53 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:24.831 12:07:53 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:24.831 12:07:53 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:24.831 12:07:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.831 ************************************ 00:06:24.831 START TEST accel_dualcast 00:06:24.831 ************************************ 00:06:24.831 12:07:53 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:06:24.831 12:07:53 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:24.831 12:07:53 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:24.831 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.831 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:24.832 [2024-05-15 12:07:53.131259] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
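Each test case in this section is wrapped by run_test (referenced via autotest_common.sh throughout the trace), which is what prints the START TEST / END TEST banners and the real/user/sys timings seen above. The sketch below is only a rough illustration of that banner-and-timing pattern; the real run_test does considerably more (xtrace control, exit-status bookkeeping), so treat the helper name and body as assumptions.

#!/usr/bin/env bash
# Rough illustration only: the START TEST / END TEST banners and the
# real/user/sys lines in this log suggest a timing wrapper of roughly this
# shape. The real run_test in autotest_common.sh differs in its details.
run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"        # bash's time keyword prints the real/user/sys lines
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}

# Hypothetical usage mirroring one of the calls above:
# run_test_sketch accel_dualcast accel_test -t 1 -w dualcast -y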
00:06:24.832 [2024-05-15 12:07:53.131314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1955429 ] 00:06:24.832 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.832 [2024-05-15 12:07:53.202299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.832 [2024-05-15 12:07:53.273312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 
12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.832 12:07:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.209 12:07:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.209 12:07:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.209 12:07:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.209 12:07:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.209 12:07:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.209 12:07:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.209 12:07:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.210 12:07:54 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:26.210 12:07:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.210 00:06:26.210 real 0m1.362s 00:06:26.210 user 0m1.242s 00:06:26.210 sys 0m0.124s 00:06:26.210 12:07:54 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:26.210 12:07:54 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:26.210 ************************************ 00:06:26.210 END TEST accel_dualcast 00:06:26.210 ************************************ 00:06:26.210 12:07:54 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:26.210 12:07:54 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:26.210 12:07:54 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:26.210 12:07:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.210 ************************************ 00:06:26.210 START TEST accel_compare 00:06:26.210 ************************************ 00:06:26.210 12:07:54 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:26.210 12:07:54 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:26.210 [2024-05-15 12:07:54.569920] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
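The -c /dev/fd/62 argument that accompanies every accel_perf invocation here is a JSON accel config handed over an inherited file descriptor. A rough illustration of that pattern using bash process substitution; the config body below is a placeholder assumption, not the one build_accel_config actually emits:

  accel_cfg='{"subsystems": []}'   # placeholder JSON config (assumption)
  # process substitution hands accel_perf a /dev/fd/NN path, mirroring the harness's /dev/fd/62
  ./build/examples/accel_perf -c <(printf '%s' "$accel_cfg") -t 1 -w compare -y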
00:06:26.210 [2024-05-15 12:07:54.569978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1955716 ] 00:06:26.210 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.210 [2024-05-15 12:07:54.638578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.210 [2024-05-15 12:07:54.706364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.470 12:07:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.408 12:07:55 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:27.408 12:07:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.408 00:06:27.408 real 0m1.357s 00:06:27.408 user 0m1.243s 00:06:27.408 sys 0m0.118s 00:06:27.408 12:07:55 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:27.408 12:07:55 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:27.408 ************************************ 00:06:27.408 END TEST accel_compare 00:06:27.408 ************************************ 00:06:27.408 12:07:55 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:27.408 12:07:55 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:27.408 12:07:55 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:27.408 12:07:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.668 ************************************ 00:06:27.668 START TEST accel_xor 00:06:27.668 ************************************ 00:06:27.668 12:07:55 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:27.668 12:07:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:27.668 [2024-05-15 12:07:56.008858] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
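The long runs of '# val=… / case "$var" in / IFS=: / read -r var val' lines are accel.sh walking key:value pairs that describe the workload and keeping the ones its end-of-test checks need (accel_opc, accel_module). A simplified reconstruction of that idea, for illustration only and not the literal accel.sh code:

  # read colon-separated key:value pairs and keep the fields the final checks look at
  while IFS=: read -r var val; do
      case "$var" in
          opc)    accel_opc=$val ;;     # e.g. xor, compare, dualcast
          module) accel_module=$val ;;  # e.g. software
      esac
  done < <(printf '%s\n' 'opc:xor' 'module:software')
  echo "accel_opc=$accel_opc accel_module=$accel_module"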
00:06:27.669 [2024-05-15 12:07:56.008928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1955996 ] 00:06:27.669 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.669 [2024-05-15 12:07:56.080447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.669 [2024-05-15 12:07:56.148372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.669 12:07:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.049 
12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.049 00:06:29.049 real 0m1.362s 00:06:29.049 user 0m1.242s 00:06:29.049 sys 0m0.124s 00:06:29.049 12:07:57 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:29.049 12:07:57 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:29.049 ************************************ 00:06:29.049 END TEST accel_xor 00:06:29.049 ************************************ 00:06:29.049 12:07:57 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:29.049 12:07:57 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:29.049 12:07:57 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:29.049 12:07:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.049 ************************************ 00:06:29.049 START TEST accel_xor 00:06:29.049 ************************************ 00:06:29.049 12:07:57 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:29.049 12:07:57 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:29.049 [2024-05-15 12:07:57.436172] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
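This second accel_xor pass repeats the workload with -x 3 appended; the config dump that follows records val=3 where the previous xor run recorded val=2, i.e. three xor source buffers instead of two. Standalone, the two invocations from the log are:

  ./build/examples/accel_perf -t 1 -w xor -y         # previous run: default xor source count (val=2 above)
  ./build/examples/accel_perf -t 1 -w xor -y -x 3    # this run: -x 3, as logged above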
00:06:29.049 [2024-05-15 12:07:57.436261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1956252 ] 00:06:29.049 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.049 [2024-05-15 12:07:57.504293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.049 [2024-05-15 12:07:57.572173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.310 12:07:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.248 
12:07:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:30.248 12:07:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.248 00:06:30.248 real 0m1.346s 00:06:30.248 user 0m1.235s 00:06:30.248 sys 0m0.115s 00:06:30.248 12:07:58 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:30.248 12:07:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:30.248 ************************************ 00:06:30.248 END TEST accel_xor 00:06:30.248 ************************************ 00:06:30.508 12:07:58 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:30.508 12:07:58 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:30.508 12:07:58 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:30.508 12:07:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.508 ************************************ 00:06:30.508 START TEST accel_dif_verify 00:06:30.508 ************************************ 00:06:30.508 12:07:58 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:30.508 12:07:58 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:30.508 [2024-05-15 12:07:58.876464] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
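Every block in this log is produced by run_test, which brackets a command with the START/END banners and the real/user/sys timing seen after each test. A stripped-down illustration of that wrapper (not the literal autotest_common.sh implementation), with accel_test standing in for the harness function invoked above:

  run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                # the wrapped command, e.g. accel_test -t 1 -w dif_verify
      echo "END TEST $name"
  }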
00:06:30.508 [2024-05-15 12:07:58.876521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1956479 ] 00:06:30.508 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.508 [2024-05-15 12:07:58.946504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.508 [2024-05-15 12:07:59.014240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 
12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.768 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.769 12:07:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.706 
12:08:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:31.706 12:08:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.706 00:06:31.706 real 0m1.364s 00:06:31.706 user 0m1.236s 00:06:31.706 sys 0m0.134s 00:06:31.706 12:08:00 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:31.706 12:08:00 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:31.706 ************************************ 00:06:31.706 END TEST accel_dif_verify 00:06:31.706 ************************************ 00:06:31.965 12:08:00 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:31.965 12:08:00 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:31.965 12:08:00 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:31.965 12:08:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.965 ************************************ 00:06:31.965 START TEST accel_dif_generate 00:06:31.965 ************************************ 00:06:31.965 12:08:00 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:06:31.965 12:08:00 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:31.965 12:08:00 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:31.965 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:31.965 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:31.965 
12:08:00 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:31.965 12:08:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:31.965 12:08:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:31.965 12:08:00 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.965 12:08:00 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.965 12:08:00 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.965 12:08:00 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.965 12:08:00 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.966 12:08:00 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:31.966 12:08:00 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:31.966 [2024-05-15 12:08:00.321214] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:31.966 [2024-05-15 12:08:00.321290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1956721 ] 00:06:31.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.966 [2024-05-15 12:08:00.393166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.966 [2024-05-15 12:08:00.467223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.225 12:08:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.163 12:08:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.163 12:08:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.163 12:08:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.163 12:08:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.163 12:08:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.163 12:08:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.163 12:08:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.163 12:08:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.163 12:08:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:33.164 12:08:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.164 00:06:33.164 real 0m1.367s 00:06:33.164 user 0m1.244s 00:06:33.164 sys 
0m0.129s 00:06:33.164 12:08:01 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:33.164 12:08:01 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:33.164 ************************************ 00:06:33.164 END TEST accel_dif_generate 00:06:33.164 ************************************ 00:06:33.164 12:08:01 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:33.164 12:08:01 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:33.164 12:08:01 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:33.164 12:08:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.423 ************************************ 00:06:33.423 START TEST accel_dif_generate_copy 00:06:33.423 ************************************ 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:33.423 [2024-05-15 12:08:01.768394] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:06:33.423 [2024-05-15 12:08:01.768474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1956947 ] 00:06:33.423 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.423 [2024-05-15 12:08:01.838669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.423 [2024-05-15 12:08:01.908702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.423 12:08:01 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.423 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.682 12:08:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.620 00:06:34.620 real 0m1.362s 00:06:34.620 user 0m1.250s 00:06:34.620 sys 0m0.116s 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:34.620 12:08:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:34.620 ************************************ 00:06:34.620 END TEST accel_dif_generate_copy 00:06:34.620 ************************************ 00:06:34.621 12:08:03 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:34.621 12:08:03 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.621 12:08:03 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:34.621 12:08:03 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:34.621 12:08:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.880 ************************************ 00:06:34.880 START TEST accel_comp 00:06:34.880 ************************************ 00:06:34.880 12:08:03 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:34.880 [2024-05-15 12:08:03.197389] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:34.880 [2024-05-15 12:08:03.197447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1957182 ] 00:06:34.880 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.880 [2024-05-15 12:08:03.268032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.880 [2024-05-15 12:08:03.341385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 
12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.880 12:08:03 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.880 12:08:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.258 12:08:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.258 12:08:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.258 12:08:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.258 12:08:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:36.259 12:08:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.259 00:06:36.259 real 0m1.371s 00:06:36.259 user 0m1.238s 00:06:36.259 sys 0m0.138s 00:06:36.259 12:08:04 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:36.259 12:08:04 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:36.259 ************************************ 00:06:36.259 END TEST accel_comp 00:06:36.259 ************************************ 00:06:36.259 12:08:04 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:36.259 12:08:04 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:36.259 12:08:04 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:36.259 12:08:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.259 ************************************ 00:06:36.259 START TEST accel_decomp 00:06:36.259 ************************************ 00:06:36.259 12:08:04 
accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:36.259 12:08:04 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:36.259 [2024-05-15 12:08:04.645398] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:36.259 [2024-05-15 12:08:04.645455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1957466 ] 00:06:36.259 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.259 [2024-05-15 12:08:04.715853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.259 [2024-05-15 12:08:04.785610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.519 12:08:04 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.519 12:08:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.457 12:08:05 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.457 00:06:37.457 real 0m1.364s 00:06:37.457 user 0m1.231s 00:06:37.457 sys 0m0.137s 00:06:37.457 12:08:05 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:37.457 12:08:05 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:37.457 ************************************ 00:06:37.457 END TEST accel_decomp 00:06:37.457 ************************************ 00:06:37.717 
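For anyone re-running the decompress case that just completed, the trace above records the exact accel_perf invocation used by the harness. A minimal standalone sketch, assuming the same workspace layout as this job and omitting the harness-supplied '-c /dev/fd/62' accel JSON config (empty in this run), is:

    # Sketch only: replays the software decompress case with the flags logged above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" \
        -t 1 \
        -w decompress \
        -l "$SPDK_DIR/test/accel/bib" \
        -y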
12:08:06 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.717 12:08:06 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:37.717 12:08:06 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:37.717 12:08:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.717 ************************************ 00:06:37.717 START TEST accel_decmop_full 00:06:37.717 ************************************ 00:06:37.717 12:08:06 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:37.717 12:08:06 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:37.717 [2024-05-15 12:08:06.090831] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:06:37.717 [2024-05-15 12:08:06.090889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1957747 ] 00:06:37.717 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.717 [2024-05-15 12:08:06.161179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.717 [2024-05-15 12:08:06.229466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.977 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.978 12:08:06 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.915 12:08:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:38.916 12:08:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.916 12:08:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.916 12:08:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.916 12:08:07 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:38.916 12:08:07 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:38.916 12:08:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:38.916 12:08:07 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:38.916 12:08:07 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.916 12:08:07 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.916 12:08:07 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.916 00:06:38.916 real 0m1.373s 00:06:38.916 user 0m1.245s 00:06:38.916 sys 0m0.133s 00:06:38.916 12:08:07 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:38.916 12:08:07 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:38.916 ************************************ 00:06:38.916 END TEST accel_decmop_full 00:06:38.916 ************************************ 00:06:39.207 12:08:07 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.207 12:08:07 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:39.207 12:08:07 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:39.207 12:08:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.207 ************************************ 00:06:39.207 START TEST accel_decomp_mcore 00:06:39.207 ************************************ 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:39.207 [2024-05-15 12:08:07.541049] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:39.207 [2024-05-15 12:08:07.541105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1958032 ] 00:06:39.207 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.207 [2024-05-15 12:08:07.610923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.207 [2024-05-15 12:08:07.680843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.207 [2024-05-15 12:08:07.680939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.207 [2024-05-15 12:08:07.681000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.207 [2024-05-15 12:08:07.681003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.207 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.208 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.208 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:39.208 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.208 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.208 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.208 12:08:07 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.467 12:08:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.405 00:06:40.405 real 0m1.376s 00:06:40.405 user 0m4.581s 00:06:40.405 sys 0m0.139s 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:40.405 12:08:08 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:40.405 ************************************ 00:06:40.405 END TEST accel_decomp_mcore 00:06:40.405 ************************************ 00:06:40.405 12:08:08 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:40.405 12:08:08 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:40.405 12:08:08 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:40.405 12:08:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.664 ************************************ 00:06:40.664 START TEST accel_decomp_full_mcore 00:06:40.664 ************************************ 00:06:40.664 12:08:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:40.664 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:40.664 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:40.664 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.664 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.664 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:40.665 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:40.665 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:40.665 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.665 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.665 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.665 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:40.665 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.665 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:40.665 12:08:08 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:40.665 [2024-05-15 12:08:09.013522] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:40.665 [2024-05-15 12:08:09.013579] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1958319 ] 00:06:40.665 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.665 [2024-05-15 12:08:09.084245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.665 [2024-05-15 12:08:09.157009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.665 [2024-05-15 12:08:09.157105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.665 [2024-05-15 12:08:09.157196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.665 [2024-05-15 12:08:09.157200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.925 12:08:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.864 12:08:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.864 00:06:41.864 real 0m1.396s 00:06:41.864 user 0m4.625s 00:06:41.864 sys 0m0.141s 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:41.864 12:08:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:41.864 ************************************ 00:06:41.864 END TEST accel_decomp_full_mcore 00:06:41.864 ************************************ 00:06:42.124 12:08:10 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:42.124 12:08:10 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:42.124 12:08:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:42.124 12:08:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.124 ************************************ 00:06:42.124 START TEST accel_decomp_mthread 00:06:42.124 ************************************ 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:42.124 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
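For reference, the accel_perf command traced in the accel_decomp_mthread setup above can be run by hand against a local build. The checkout path below is a placeholder, and the flags simply mirror the logged invocation (the -y and -T 2 switches are passed through exactly as logged); the harness additionally supplies -c /dev/fd/62 with a generated JSON config, which a standalone run can normally leave out:

  # Placeholder path; flags copied from the logged accel_decomp_mthread invocation.
  SPDK_DIR=$HOME/spdk
  "$SPDK_DIR/build/examples/accel_perf" \
      -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" \
      -y -T 2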
00:06:42.124 [2024-05-15 12:08:10.501285] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:42.124 [2024-05-15 12:08:10.501347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1958605 ] 00:06:42.124 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.124 [2024-05-15 12:08:10.573764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.124 [2024-05-15 12:08:10.646099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:42.385 12:08:10 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.322 00:06:43.322 real 0m1.381s 00:06:43.322 user 0m1.260s 00:06:43.322 sys 0m0.135s 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:43.322 12:08:11 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:43.582 ************************************ 00:06:43.582 END TEST accel_decomp_mthread 00:06:43.582 ************************************ 00:06:43.582 12:08:11 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:43.582 12:08:11 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:43.582 12:08:11 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:43.582 12:08:11 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.582 ************************************ 00:06:43.582 START TEST accel_decomp_full_mthread 00:06:43.582 ************************************ 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:43.582 12:08:11 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:43.582 [2024-05-15 12:08:11.972389] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
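The build_accel_config trace just above (accel_json_cfg=(), the 0 -gt 0 checks, local IFS=, and jq -r .) is what produces the JSON that accel_perf reads from /dev/fd/62: optional per-module fragments are gathered into an array, joined with commas, wrapped into an SPDK-style config and passed through jq. A rough sketch of that idea follows; the fragment variables and the exact subsystem layout are illustrative guesses, not the real accel.sh internals:

  # Illustrative only: join whatever module fragments apply into one JSON config.
  build_accel_config_sketch() {
      local accel_json_cfg=() IFS=,
      [[ -n ${crypto_fragment:-} ]] && accel_json_cfg+=("$crypto_fragment")
      [[ -n ${dsa_fragment:-} ]] && accel_json_cfg+=("$dsa_fragment")
      jq -r . <<< "{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}"
  }

In the runs logged here every 0 -gt 0 check is false, so the generated config is effectively empty and the tests settle on accel_module=software, which matches the [[ -n software ]] assertions above.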
00:06:43.582 [2024-05-15 12:08:11.972458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1958893 ] 00:06:43.582 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.582 [2024-05-15 12:08:12.044237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.842 [2024-05-15 12:08:12.113928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.842 12:08:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.231 00:06:45.231 real 0m1.391s 00:06:45.231 user 0m1.264s 00:06:45.231 sys 0m0.140s 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:45.231 12:08:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:45.231 ************************************ 00:06:45.231 END TEST accel_decomp_full_mthread 00:06:45.231 
************************************ 00:06:45.231 12:08:13 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:45.231 12:08:13 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:45.231 12:08:13 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:45.231 12:08:13 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:45.231 12:08:13 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:45.231 12:08:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.231 12:08:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.231 12:08:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.231 12:08:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.231 12:08:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.231 12:08:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.231 12:08:13 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:45.231 12:08:13 accel -- accel/accel.sh@41 -- # jq -r . 00:06:45.231 ************************************ 00:06:45.231 START TEST accel_dif_functional_tests 00:06:45.231 ************************************ 00:06:45.231 12:08:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:45.231 [2024-05-15 12:08:13.471275] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:45.231 [2024-05-15 12:08:13.471319] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1959176 ] 00:06:45.231 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.231 [2024-05-15 12:08:13.537048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.232 [2024-05-15 12:08:13.609682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.232 [2024-05-15 12:08:13.609779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.232 [2024-05-15 12:08:13.609780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.232 00:06:45.232 00:06:45.232 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.232 http://cunit.sourceforge.net/ 00:06:45.232 00:06:45.232 00:06:45.232 Suite: accel_dif 00:06:45.232 Test: verify: DIF generated, GUARD check ...passed 00:06:45.232 Test: verify: DIF generated, APPTAG check ...passed 00:06:45.232 Test: verify: DIF generated, REFTAG check ...passed 00:06:45.232 Test: verify: DIF not generated, GUARD check ...[2024-05-15 12:08:13.678049] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:45.232 [2024-05-15 12:08:13.678095] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:45.232 passed 00:06:45.232 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 12:08:13.678129] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:45.232 [2024-05-15 12:08:13.678151] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:45.232 passed 00:06:45.232 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 12:08:13.678172] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:45.232 [2024-05-15 
12:08:13.678198] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:45.232 passed 00:06:45.232 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:45.232 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 12:08:13.678246] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:45.232 passed 00:06:45.232 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:45.232 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:45.232 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:45.232 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 12:08:13.678363] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:45.232 passed 00:06:45.232 Test: generate copy: DIF generated, GUARD check ...passed 00:06:45.232 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:45.232 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:45.232 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:45.232 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:45.232 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:45.232 Test: generate copy: iovecs-len validate ...[2024-05-15 12:08:13.678541] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:45.232 passed 00:06:45.232 Test: generate copy: buffer alignment validate ...passed 00:06:45.232 00:06:45.232 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.232 suites 1 1 n/a 0 0 00:06:45.232 tests 20 20 20 0 0 00:06:45.232 asserts 204 204 204 0 n/a 00:06:45.232 00:06:45.232 Elapsed time = 0.002 seconds 00:06:45.491 00:06:45.491 real 0m0.440s 00:06:45.491 user 0m0.594s 00:06:45.491 sys 0m0.161s 00:06:45.491 12:08:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:45.491 12:08:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:45.491 ************************************ 00:06:45.491 END TEST accel_dif_functional_tests 00:06:45.491 ************************************ 00:06:45.491 00:06:45.491 real 0m32.226s 00:06:45.491 user 0m35.067s 00:06:45.491 sys 0m5.036s 00:06:45.491 12:08:13 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:45.491 12:08:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.491 ************************************ 00:06:45.491 END TEST accel 00:06:45.491 ************************************ 00:06:45.491 12:08:13 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:45.491 12:08:13 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:45.491 12:08:13 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:45.491 12:08:13 -- common/autotest_common.sh@10 -- # set +x 00:06:45.491 ************************************ 00:06:45.491 START TEST accel_rpc 00:06:45.491 ************************************ 00:06:45.491 12:08:13 accel_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:45.751 * Looking for test storage... 
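The *ERROR* lines from dif.c inside the accel_dif_functional_tests block above are expected output: the negative verify cases feed deliberately mismatched Guard/App/Ref tags and pass precisely because dif.c reports the comparison failure, which is why the CUnit summary still ends with 20/20 tests and 204/204 asserts passed. The same binary can be launched outside the harness; the checkout path below is a placeholder and the empty JSON config is an assumption (the harness generates a config and hands it over on a file descriptor instead):

  # Placeholder path; '-c' normally receives the harness-generated JSON config.
  SPDK_DIR=$HOME/spdk
  "$SPDK_DIR/test/accel/dif/dif" -c <(echo '{}')   # empty config: assumption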
00:06:45.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:45.751 12:08:14 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:45.751 12:08:14 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1959295 00:06:45.751 12:08:14 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1959295 00:06:45.751 12:08:14 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:45.751 12:08:14 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 1959295 ']' 00:06:45.751 12:08:14 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.751 12:08:14 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:45.751 12:08:14 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.751 12:08:14 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:45.751 12:08:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.751 [2024-05-15 12:08:14.161478] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:06:45.751 [2024-05-15 12:08:14.161537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1959295 ] 00:06:45.751 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.751 [2024-05-15 12:08:14.232667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.010 [2024-05-15 12:08:14.303975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.579 12:08:14 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:46.579 12:08:14 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:46.579 12:08:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:46.579 12:08:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:46.579 12:08:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:46.579 12:08:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:46.579 12:08:14 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:46.579 12:08:14 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:46.579 12:08:14 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:46.579 12:08:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.579 ************************************ 00:06:46.579 START TEST accel_assign_opcode 00:06:46.579 ************************************ 00:06:46.579 12:08:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:06:46.579 12:08:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:46.579 12:08:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.579 12:08:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:46.579 [2024-05-15 12:08:14.990040] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:46.579 12:08:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:06:46.579 12:08:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:46.579 12:08:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.579 12:08:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:46.579 [2024-05-15 12:08:14.998058] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:46.579 12:08:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.579 12:08:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:46.579 12:08:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.579 12:08:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:46.838 12:08:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.838 12:08:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:46.838 12:08:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.838 12:08:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:46.838 12:08:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:46.838 12:08:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:46.838 12:08:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.838 software 00:06:46.838 00:06:46.838 real 0m0.234s 00:06:46.838 user 0m0.040s 00:06:46.838 sys 0m0.014s 00:06:46.838 12:08:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:46.838 12:08:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:46.838 ************************************ 00:06:46.838 END TEST accel_assign_opcode 00:06:46.838 ************************************ 00:06:46.838 12:08:15 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1959295 00:06:46.838 12:08:15 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 1959295 ']' 00:06:46.838 12:08:15 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 1959295 00:06:46.838 12:08:15 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:06:46.838 12:08:15 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:46.839 12:08:15 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1959295 00:06:46.839 12:08:15 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:46.839 12:08:15 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:46.839 12:08:15 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1959295' 00:06:46.839 killing process with pid 1959295 00:06:46.839 12:08:15 accel_rpc -- common/autotest_common.sh@966 -- # kill 1959295 00:06:46.839 12:08:15 accel_rpc -- common/autotest_common.sh@971 -- # wait 1959295 00:06:47.407 00:06:47.407 real 0m1.653s 00:06:47.407 user 0m1.678s 00:06:47.407 sys 0m0.489s 00:06:47.407 12:08:15 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:47.407 12:08:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.407 ************************************ 00:06:47.407 END TEST accel_rpc 00:06:47.407 ************************************ 00:06:47.407 12:08:15 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:47.407 12:08:15 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:47.407 12:08:15 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:47.407 12:08:15 -- common/autotest_common.sh@10 -- # set +x 00:06:47.407 ************************************ 00:06:47.407 START TEST app_cmdline 00:06:47.407 ************************************ 00:06:47.407 12:08:15 app_cmdline -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:47.407 * Looking for test storage... 00:06:47.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:47.407 12:08:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:47.407 12:08:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1959755 00:06:47.407 12:08:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1959755 00:06:47.407 12:08:15 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:47.407 12:08:15 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 1959755 ']' 00:06:47.407 12:08:15 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.407 12:08:15 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:47.407 12:08:15 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.407 12:08:15 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:47.407 12:08:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.407 [2024-05-15 12:08:15.901292] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
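The cmdline test boots spdk_tgt with an allow-list of exactly two RPCs; the spdk_get_version JSON and the sorted rpc_get_methods output further down in the trace are the responses to those calls. With a target running the same way, the same methods can be issued directly through scripts/rpc.py (the checkout path and the jq/sort post-processing here are only for illustration):

  # Assumes a running target started as in the log:
  #   spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
  SPDK_DIR=$HOME/spdk                                        # placeholder path
  "$SPDK_DIR/scripts/rpc.py" spdk_get_version | jq -r .version
  "$SPDK_DIR/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort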
00:06:47.407 [2024-05-15 12:08:15.901343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1959755 ] 00:06:47.407 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.667 [2024-05-15 12:08:15.971751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.667 [2024-05-15 12:08:16.043460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.236 12:08:16 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:48.236 12:08:16 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:06:48.236 12:08:16 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:48.495 { 00:06:48.495 "version": "SPDK v24.05-pre git sha1 62bc4f069", 00:06:48.495 "fields": { 00:06:48.495 "major": 24, 00:06:48.495 "minor": 5, 00:06:48.495 "patch": 0, 00:06:48.495 "suffix": "-pre", 00:06:48.495 "commit": "62bc4f069" 00:06:48.495 } 00:06:48.495 } 00:06:48.495 12:08:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:48.495 12:08:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:48.495 12:08:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:48.495 12:08:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:48.495 12:08:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.495 12:08:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:48.495 12:08:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.495 12:08:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:48.495 12:08:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:48.495 12:08:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.495 12:08:16 
app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:48.495 12:08:16 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.756 request: 00:06:48.756 { 00:06:48.756 "method": "env_dpdk_get_mem_stats", 00:06:48.756 "req_id": 1 00:06:48.756 } 00:06:48.756 Got JSON-RPC error response 00:06:48.756 response: 00:06:48.756 { 00:06:48.756 "code": -32601, 00:06:48.756 "message": "Method not found" 00:06:48.756 } 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:48.756 12:08:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1959755 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 1959755 ']' 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 1959755 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1959755 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1959755' 00:06:48.756 killing process with pid 1959755 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@966 -- # kill 1959755 00:06:48.756 12:08:17 app_cmdline -- common/autotest_common.sh@971 -- # wait 1959755 00:06:49.016 00:06:49.016 real 0m1.724s 00:06:49.016 user 0m1.977s 00:06:49.016 sys 0m0.514s 00:06:49.016 12:08:17 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:49.016 12:08:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.016 ************************************ 00:06:49.016 END TEST app_cmdline 00:06:49.016 ************************************ 00:06:49.016 12:08:17 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:49.016 12:08:17 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:49.016 12:08:17 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:49.016 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:06:49.326 ************************************ 00:06:49.326 START TEST version 00:06:49.326 ************************************ 00:06:49.326 12:08:17 version -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:49.326 * Looking for test storage... 
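The app_cmdline run that just finished is, at its core, an RPC allow-list check: spdk_tgt is started so that only spdk_get_version and rpc_get_methods are callable over /var/tmp/spdk.sock, and every other method has to come back as -32601 "Method not found". A minimal standalone sketch of that check, using only commands visible in the log above (the SPDK_DIR variable and the sleep are illustrative stand-ins; the test itself uses its waitforlisten helper):

  # Sketch only: the allow-list check exercised by test/app/cmdline.sh,
  # assuming an SPDK build under the workspace path shown in the log.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK_DIR/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  tgt_pid=$!
  sleep 2            # crude stand-in for waitforlisten on /var/tmp/spdk.sock

  $SPDK_DIR/scripts/rpc.py spdk_get_version     # allowed -> returns the version JSON
  $SPDK_DIR/scripts/rpc.py rpc_get_methods      # allowed -> lists exactly the two methods

  # Anything outside the allow-list must fail with -32601 "Method not found".
  if $SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats 2>/dev/null; then
      echo "unexpected: RPC was not filtered" >&2
  fi

  kill $tgt_pid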
00:06:49.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:49.326 12:08:17 version -- app/version.sh@17 -- # get_header_version major 00:06:49.326 12:08:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.326 12:08:17 version -- app/version.sh@14 -- # cut -f2 00:06:49.326 12:08:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.326 12:08:17 version -- app/version.sh@17 -- # major=24 00:06:49.326 12:08:17 version -- app/version.sh@18 -- # get_header_version minor 00:06:49.326 12:08:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.326 12:08:17 version -- app/version.sh@14 -- # cut -f2 00:06:49.326 12:08:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.326 12:08:17 version -- app/version.sh@18 -- # minor=5 00:06:49.326 12:08:17 version -- app/version.sh@19 -- # get_header_version patch 00:06:49.326 12:08:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.326 12:08:17 version -- app/version.sh@14 -- # cut -f2 00:06:49.326 12:08:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.326 12:08:17 version -- app/version.sh@19 -- # patch=0 00:06:49.326 12:08:17 version -- app/version.sh@20 -- # get_header_version suffix 00:06:49.326 12:08:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.326 12:08:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.326 12:08:17 version -- app/version.sh@14 -- # cut -f2 00:06:49.326 12:08:17 version -- app/version.sh@20 -- # suffix=-pre 00:06:49.326 12:08:17 version -- app/version.sh@22 -- # version=24.5 00:06:49.326 12:08:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:49.326 12:08:17 version -- app/version.sh@28 -- # version=24.5rc0 00:06:49.326 12:08:17 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:49.326 12:08:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:49.326 12:08:17 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:49.326 12:08:17 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:49.326 00:06:49.326 real 0m0.180s 00:06:49.326 user 0m0.088s 00:06:49.326 sys 0m0.137s 00:06:49.326 12:08:17 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:49.326 12:08:17 version -- common/autotest_common.sh@10 -- # set +x 00:06:49.326 ************************************ 00:06:49.326 END TEST version 00:06:49.326 ************************************ 00:06:49.326 12:08:17 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:49.326 12:08:17 -- spdk/autotest.sh@194 -- # uname -s 00:06:49.326 12:08:17 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:49.326 12:08:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:49.326 12:08:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:49.326 12:08:17 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
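The "END TEST version" block above is essentially a consistency check between include/spdk/version.h and the bundled Python package. A condensed sketch of that extraction, reusing the grep/cut/tr pipeline from the log (the get_field helper and SPDK_DIR variable are illustrative, not part of the test script):

  # Sketch of the check behind test/app/version.sh: read the version fields
  # out of version.h and compare with what python reports.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace layout assumed
  hdr=$SPDK_DIR/include/spdk/version.h

  get_field() {   # e.g. "get_field MAJOR" -> 24
      grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
  }

  major=$(get_field MAJOR); minor=$(get_field MINOR)
  patch=$(get_field PATCH); suffix=$(get_field SUFFIX)

  version="$major.$minor"
  (( patch != 0 )) && version+=".$patch"
  [[ $suffix == -pre ]] && version+=rc0    # "-pre" headers show up as "rc0" in python, as in the log

  py_version=$(PYTHONPATH=$SPDK_DIR/python python3 -c 'import spdk; print(spdk.__version__)')
  [[ $py_version == "$version" ]] && echo "version OK: $version"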
00:06:49.326 12:08:17 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:49.326 12:08:17 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:49.326 12:08:17 -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:49.326 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:06:49.326 12:08:17 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:49.326 12:08:17 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:49.326 12:08:17 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:06:49.326 12:08:17 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:06:49.326 12:08:17 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:06:49.326 12:08:17 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:06:49.326 12:08:17 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:49.326 12:08:17 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:49.326 12:08:17 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:49.326 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:06:49.586 ************************************ 00:06:49.586 START TEST nvmf_tcp 00:06:49.586 ************************************ 00:06:49.586 12:08:17 nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:49.586 * Looking for test storage... 00:06:49.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.586 12:08:17 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.586 12:08:18 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.586 12:08:18 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.586 12:08:18 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.586 12:08:18 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.586 12:08:18 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.586 12:08:18 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.586 12:08:18 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:49.586 12:08:18 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:49.586 12:08:18 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:49.586 12:08:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:49.586 12:08:18 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:49.586 12:08:18 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:49.586 12:08:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:49.586 
12:08:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.586 ************************************ 00:06:49.586 START TEST nvmf_example 00:06:49.586 ************************************ 00:06:49.586 12:08:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:49.846 * Looking for test storage... 00:06:49.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.846 12:08:18 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@721 -- # xtrace_disable 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:49.847 12:08:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:56.419 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:56.419 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:56.419 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:56.419 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:56.420 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:56.420 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:56.420 Found net devices under 
0000:af:00.0: cvl_0_0 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:56.420 Found net devices under 0000:af:00.1: cvl_0_1 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:56.420 12:08:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:56.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:56.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:06:56.710 00:06:56.710 --- 10.0.0.2 ping statistics --- 00:06:56.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.710 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:56.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:56.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:06:56.710 00:06:56.710 --- 10.0.0.1 ping statistics --- 00:06:56.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.710 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1963629 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1963629 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # '[' -z 1963629 ']' 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
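The nvmf_tcp_init plumbing a few lines up is what gives the test a real initiator/target link: one port of the e810 pair is moved into a private network namespace, 10.0.0.1 stays on the host side (cvl_0_1), 10.0.0.2 lives on cvl_0_0 inside cvl_0_0_ns_spdk, and a ping in each direction proves the path. Collected into one standalone sketch, assuming the same interface names as the log and root privileges:

  # Sketch of the namespace plumbing done by nvmf/common.sh (nvmf_tcp_init).
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side lives in the netns

  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in

  ping -c 1 10.0.0.2                                  # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host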
00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:56.710 12:08:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:56.969 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@861 -- # return 0 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:57.906 12:08:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:57.906 EAL: No free 2048 kB hugepages reported on node 1 
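The perf numbers that follow come from the bring-up just above: the examples/nvmf app is started inside the target namespace, a 64 MiB malloc bdev is exported as nqn.2016-06.io.spdk:cnode1 over NVMe/TCP on 10.0.0.2:4420, and spdk_nvme_perf drives it from the host side. Condensed into a sketch with the same arguments as the log (rpc.py stands in for the test's rpc_cmd wrapper, and the sleep for its wait helper; SPDK_DIR as in the earlier sketches):

  # Sketch of the target bring-up behind the results below, reusing the
  # namespace and addresses set up earlier.
  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/examples/nvmf -i 0 -g 10000 -m 0xF &
  sleep 3                                    # stand-in for the test's waitforlisten

  rpc="$SPDK_DIR/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                      # -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 10 s of 4 KiB random mixed I/O (-M 30 = 30% reads) at queue depth 64 over TCP:
  $SPDK_DIR/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'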
00:07:07.886 Initializing NVMe Controllers 00:07:07.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:07.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:07.886 Initialization complete. Launching workers. 00:07:07.886 ======================================================== 00:07:07.886 Latency(us) 00:07:07.886 Device Information : IOPS MiB/s Average min max 00:07:07.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14360.30 56.09 4456.46 686.76 15481.91 00:07:07.886 ======================================================== 00:07:07.886 Total : 14360.30 56.09 4456.46 686.76 15481.91 00:07:07.886 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:07.886 rmmod nvme_tcp 00:07:07.886 rmmod nvme_fabrics 00:07:07.886 rmmod nvme_keyring 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1963629 ']' 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1963629 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # '[' -z 1963629 ']' 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # kill -0 1963629 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # uname 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:07.886 12:08:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1963629 00:07:08.144 12:08:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # process_name=nvmf 00:07:08.144 12:08:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@957 -- # '[' nvmf = sudo ']' 00:07:08.144 12:08:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1963629' 00:07:08.144 killing process with pid 1963629 00:07:08.144 12:08:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # kill 1963629 00:07:08.144 12:08:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@971 -- # wait 1963629 00:07:08.144 nvmf threads initialize successfully 00:07:08.144 bdev subsystem init successfully 00:07:08.144 created a nvmf target service 00:07:08.144 create targets's poll groups done 00:07:08.144 all subsystems of target started 00:07:08.144 nvmf target is running 00:07:08.144 all subsystems of target stopped 00:07:08.144 destroy targets's poll groups done 00:07:08.144 destroyed the nvmf target service 00:07:08.144 bdev subsystem finish successfully 00:07:08.144 nvmf threads destroy successfully 00:07:08.144 12:08:36 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:08.144 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:08.144 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:08.144 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:08.144 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:08.144 12:08:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.144 12:08:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.144 12:08:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.684 12:08:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:10.684 12:08:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:10.684 12:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:10.684 12:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.684 00:07:10.684 real 0m20.696s 00:07:10.684 user 0m45.283s 00:07:10.684 sys 0m7.435s 00:07:10.684 12:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:10.684 12:08:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.684 ************************************ 00:07:10.684 END TEST nvmf_example 00:07:10.684 ************************************ 00:07:10.684 12:08:38 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:10.684 12:08:38 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:10.684 12:08:38 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:10.685 12:08:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:10.685 ************************************ 00:07:10.685 START TEST nvmf_filesystem 00:07:10.685 ************************************ 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:10.685 * Looking for test storage... 
00:07:10.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:10.685 12:08:38 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:10.685 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:10.685 #define SPDK_CONFIG_H 00:07:10.685 #define SPDK_CONFIG_APPS 1 00:07:10.685 #define SPDK_CONFIG_ARCH native 00:07:10.685 #undef SPDK_CONFIG_ASAN 00:07:10.685 #undef SPDK_CONFIG_AVAHI 00:07:10.685 #undef SPDK_CONFIG_CET 00:07:10.685 #define SPDK_CONFIG_COVERAGE 1 00:07:10.685 #define SPDK_CONFIG_CROSS_PREFIX 00:07:10.685 #undef SPDK_CONFIG_CRYPTO 00:07:10.685 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:10.685 #undef SPDK_CONFIG_CUSTOMOCF 00:07:10.685 #undef SPDK_CONFIG_DAOS 00:07:10.685 #define SPDK_CONFIG_DAOS_DIR 00:07:10.685 #define SPDK_CONFIG_DEBUG 1 00:07:10.686 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:10.686 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:10.686 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:10.686 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:10.686 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:10.686 #undef SPDK_CONFIG_DPDK_UADK 00:07:10.686 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:10.686 #define SPDK_CONFIG_EXAMPLES 1 00:07:10.686 #undef SPDK_CONFIG_FC 00:07:10.686 #define SPDK_CONFIG_FC_PATH 00:07:10.686 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:10.686 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:10.686 #undef SPDK_CONFIG_FUSE 00:07:10.686 #undef SPDK_CONFIG_FUZZER 00:07:10.686 #define SPDK_CONFIG_FUZZER_LIB 00:07:10.686 #undef SPDK_CONFIG_GOLANG 00:07:10.686 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:10.686 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:10.686 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:10.686 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:10.686 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:10.686 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:10.686 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:10.686 #define SPDK_CONFIG_IDXD 1 00:07:10.686 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:10.686 #undef SPDK_CONFIG_IPSEC_MB 00:07:10.686 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:10.686 #define SPDK_CONFIG_ISAL 1 00:07:10.686 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:10.686 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:10.686 #define SPDK_CONFIG_LIBDIR 00:07:10.686 #undef SPDK_CONFIG_LTO 00:07:10.686 #define SPDK_CONFIG_MAX_LCORES 00:07:10.686 #define SPDK_CONFIG_NVME_CUSE 1 00:07:10.686 #undef SPDK_CONFIG_OCF 00:07:10.686 #define SPDK_CONFIG_OCF_PATH 00:07:10.686 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:10.686 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:10.686 #define SPDK_CONFIG_PGO_DIR 00:07:10.686 #undef SPDK_CONFIG_PGO_USE 00:07:10.686 #define SPDK_CONFIG_PREFIX /usr/local 00:07:10.686 #undef SPDK_CONFIG_RAID5F 00:07:10.686 #undef SPDK_CONFIG_RBD 00:07:10.686 #define SPDK_CONFIG_RDMA 1 00:07:10.686 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:10.686 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:10.686 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:10.686 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:10.686 #define SPDK_CONFIG_SHARED 1 00:07:10.686 #undef SPDK_CONFIG_SMA 00:07:10.686 #define SPDK_CONFIG_TESTS 1 00:07:10.686 #undef SPDK_CONFIG_TSAN 00:07:10.686 #define SPDK_CONFIG_UBLK 1 00:07:10.686 #define SPDK_CONFIG_UBSAN 1 00:07:10.686 #undef SPDK_CONFIG_UNIT_TESTS 00:07:10.686 #undef SPDK_CONFIG_URING 00:07:10.686 #define SPDK_CONFIG_URING_PATH 00:07:10.686 #undef SPDK_CONFIG_URING_ZNS 00:07:10.686 #undef SPDK_CONFIG_USDT 00:07:10.686 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:10.686 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:10.686 #define SPDK_CONFIG_VFIO_USER 1 00:07:10.686 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:10.686 #define SPDK_CONFIG_VHOST 1 00:07:10.686 #define SPDK_CONFIG_VIRTIO 1 00:07:10.686 #undef SPDK_CONFIG_VTUNE 00:07:10.686 #define SPDK_CONFIG_VTUNE_DIR 00:07:10.686 #define SPDK_CONFIG_WERROR 1 00:07:10.686 #define SPDK_CONFIG_WPDK_DIR 00:07:10.686 #undef SPDK_CONFIG_XNVME 00:07:10.686 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:10.686 12:08:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:10.686 12:08:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.686 12:08:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.686 12:08:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.686 12:08:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.686 12:08:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.686 12:08:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.686 12:08:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.686 12:08:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:10.686 12:08:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.686 12:08:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:10.686 12:08:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:10.686 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:10.687 12:08:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:10.687 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1966024 ]] 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1966024 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.9zdtTV 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.9zdtTV/tests/target /tmp/spdk.9zdtTV 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=972304384 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4312125440 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=52292648960 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742292992 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9449644032 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30867771392 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871146496 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12339077120 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9383936 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30869766144 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871146496 00:07:10.688 12:08:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1380352 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:10.688 * Looking for test storage... 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=52292648960 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=11664236544 00:07:10.688 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set -o errtrace 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:07:10.689 12:08:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # true 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # xtrace_fd 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.689 
12:08:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.689 12:08:39 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:10.689 12:08:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.258 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:17.259 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:17.259 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.259 12:08:45 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:17.259 Found net devices under 0000:af:00.0: cvl_0_0 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:17.259 Found net devices under 0000:af:00.1: cvl_0_1 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:17.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:07:17.259 00:07:17.259 --- 10.0.0.2 ping statistics --- 00:07:17.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.259 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:07:17.259 00:07:17.259 --- 10.0.0.1 ping statistics --- 00:07:17.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.259 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:07:17.259 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.260 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:17.260 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:17.260 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.260 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:17.260 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:17.260 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.260 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:17.260 12:08:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:17.260 12:08:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:17.260 12:08:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:17.260 12:08:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:17.260 12:08:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:17.520 ************************************ 00:07:17.520 START TEST nvmf_filesystem_no_in_capsule 00:07:17.520 ************************************ 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 0 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@721 -- # 
xtrace_disable 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1969295 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1969295 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 1969295 ']' 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.520 12:08:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:17.520 [2024-05-15 12:08:45.869531] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:07:17.520 [2024-05-15 12:08:45.869573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.520 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.520 [2024-05-15 12:08:45.943445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.520 [2024-05-15 12:08:46.020172] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.520 [2024-05-15 12:08:46.020210] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.520 [2024-05-15 12:08:46.020219] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.520 [2024-05-15 12:08:46.020228] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.520 [2024-05-15 12:08:46.020235] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
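For orientation: the nvmf_tcp_init block above moves one port of the E810 pair (cvl_0_0) into a private network namespace, addresses both ends, opens TCP port 4420, and verifies the link with a ping in each direction before the target is started inside that namespace. A minimal sketch of that setup, reconstructed from the commands in the trace (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are simply the values used in this run, not fixed requirements):

    # Namespace setup as performed by nvmf_tcp_init in nvmf/common.sh (sketch, not the verbatim script)
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address stays in the root namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                            # root namespace -> target
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1     # namespace -> initiator
    modprobe nvme-tcp                                             # kernel initiator transport

With the link verified, nvmfappstart launches nvmf_tgt under ip netns exec cvl_0_0_ns_spdk (the nvmf/common.sh@480 entry above), which is why the EAL and reactor notices around this point originate from inside the namespace.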
00:07:17.520 [2024-05-15 12:08:46.020282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.520 [2024-05-15 12:08:46.020373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.520 [2024-05-15 12:08:46.020463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.520 [2024-05-15 12:08:46.020465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.459 [2024-05-15 12:08:46.724081] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.459 Malloc1 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.459 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.460 [2024-05-15 12:08:46.874064] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:18.460 [2024-05-15 12:08:46.874323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:07:18.460 { 00:07:18.460 "name": "Malloc1", 00:07:18.460 "aliases": [ 00:07:18.460 "513b64ff-6ded-4f89-bf74-fa9985339919" 00:07:18.460 ], 00:07:18.460 "product_name": "Malloc disk", 00:07:18.460 "block_size": 512, 00:07:18.460 "num_blocks": 1048576, 00:07:18.460 "uuid": "513b64ff-6ded-4f89-bf74-fa9985339919", 00:07:18.460 "assigned_rate_limits": { 00:07:18.460 "rw_ios_per_sec": 0, 00:07:18.460 "rw_mbytes_per_sec": 0, 00:07:18.460 "r_mbytes_per_sec": 0, 00:07:18.460 "w_mbytes_per_sec": 0 00:07:18.460 }, 00:07:18.460 "claimed": true, 00:07:18.460 "claim_type": "exclusive_write", 00:07:18.460 "zoned": false, 00:07:18.460 "supported_io_types": { 00:07:18.460 "read": true, 00:07:18.460 "write": true, 00:07:18.460 "unmap": true, 00:07:18.460 "write_zeroes": true, 00:07:18.460 "flush": true, 00:07:18.460 "reset": true, 00:07:18.460 "compare": false, 00:07:18.460 "compare_and_write": false, 00:07:18.460 "abort": true, 00:07:18.460 "nvme_admin": false, 00:07:18.460 "nvme_io": false 00:07:18.460 }, 00:07:18.460 "memory_domains": [ 00:07:18.460 { 00:07:18.460 "dma_device_id": "system", 00:07:18.460 "dma_device_type": 1 
00:07:18.460 }, 00:07:18.460 { 00:07:18.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.460 "dma_device_type": 2 00:07:18.460 } 00:07:18.460 ], 00:07:18.460 "driver_specific": {} 00:07:18.460 } 00:07:18.460 ]' 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:07:18.460 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:07:18.720 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:07:18.720 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:07:18.720 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:07:18.720 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:18.720 12:08:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.100 12:08:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.100 12:08:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:07:20.100 12:08:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.100 12:08:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:20.100 12:08:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:22.008 12:08:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:22.008 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:22.271 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:22.582 12:08:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:23.521 12:08:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:23.521 12:08:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:23.521 12:08:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:23.522 12:08:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:23.522 12:08:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.522 ************************************ 00:07:23.522 START TEST filesystem_ext4 00:07:23.522 ************************************ 00:07:23.522 12:08:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:23.522 12:08:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:23.522 12:08:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.522 12:08:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:23.522 12:08:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:07:23.522 12:08:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:23.522 12:08:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:07:23.522 12:08:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local force 00:07:23.522 12:08:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:07:23.522 12:08:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:07:23.522 12:08:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:23.522 mke2fs 1.46.5 (30-Dec-2021) 00:07:23.842 Discarding device blocks: 0/522240 done 00:07:23.842 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:23.842 Filesystem UUID: a16d4c6c-aa10-4a24-b119-1f10fcb4ec0b 00:07:23.842 Superblock backups stored on blocks: 00:07:23.842 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:23.842 00:07:23.842 Allocating group tables: 0/64 done 00:07:23.842 Writing inode tables: 0/64 done 00:07:23.842 Creating journal (8192 blocks): done 00:07:24.790 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:07:24.790 00:07:24.790 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@942 -- # return 0 00:07:24.790 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1969295 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.057 00:07:25.057 real 0m1.442s 00:07:25.057 user 0m0.032s 00:07:25.057 sys 0m0.075s 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:25.057 ************************************ 00:07:25.057 END TEST filesystem_ext4 00:07:25.057 ************************************ 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:25.057 
12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.057 ************************************ 00:07:25.057 START TEST filesystem_btrfs 00:07:25.057 ************************************ 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local force 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:07:25.057 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:25.316 btrfs-progs v6.6.2 00:07:25.316 See https://btrfs.readthedocs.io for more information. 00:07:25.316 00:07:25.316 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:25.316 NOTE: several default settings have changed in version 5.15, please make sure 00:07:25.316 this does not affect your deployments: 00:07:25.316 - DUP for metadata (-m dup) 00:07:25.316 - enabled no-holes (-O no-holes) 00:07:25.316 - enabled free-space-tree (-R free-space-tree) 00:07:25.316 00:07:25.316 Label: (null) 00:07:25.316 UUID: 67192ca6-2d87-4076-a5bb-d28625aa594d 00:07:25.316 Node size: 16384 00:07:25.316 Sector size: 4096 00:07:25.316 Filesystem size: 510.00MiB 00:07:25.316 Block group profiles: 00:07:25.316 Data: single 8.00MiB 00:07:25.316 Metadata: DUP 32.00MiB 00:07:25.316 System: DUP 8.00MiB 00:07:25.316 SSD detected: yes 00:07:25.316 Zoned device: no 00:07:25.316 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:25.316 Runtime features: free-space-tree 00:07:25.316 Checksum: crc32c 00:07:25.316 Number of devices: 1 00:07:25.316 Devices: 00:07:25.316 ID SIZE PATH 00:07:25.316 1 510.00MiB /dev/nvme0n1p1 00:07:25.316 00:07:25.316 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@942 -- # return 0 00:07:25.317 12:08:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1969295 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.883 00:07:25.883 real 0m0.766s 00:07:25.883 user 0m0.033s 00:07:25.883 sys 0m0.140s 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:25.883 ************************************ 00:07:25.883 END TEST filesystem_btrfs 00:07:25.883 ************************************ 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:25.883 12:08:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.883 ************************************ 00:07:25.883 START TEST filesystem_xfs 00:07:25.883 ************************************ 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local i=0 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local force 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # force=-f 00:07:25.883 12:08:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:26.142 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:26.142 = sectsz=512 attr=2, projid32bit=1 00:07:26.142 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:26.142 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:26.142 data = bsize=4096 blocks=130560, imaxpct=25 00:07:26.142 = sunit=0 swidth=0 blks 00:07:26.142 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:26.142 log =internal log bsize=4096 blocks=16384, version=2 00:07:26.142 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:26.142 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:27.079 Discarding blocks...Done. 
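Each of the run_test calls in this pass (filesystem_ext4, filesystem_btrfs, and now filesystem_xfs) drives the same nvmf_filesystem_create body from target/filesystem.sh: make the filesystem on the GPT partition that sits on the NVMe/TCP-attached namespace, do a small create/sync/delete cycle, unmount, and confirm that both the block device and the target process survived. Sketched from the trace (the fstype and the -f/-F force flag vary per filesystem; nvmfpid is the PID captured by nvmfappstart):

    # Per-filesystem test cycle as it appears in the trace above (sketch, not the verbatim script)
    fstype=xfs                       # ext4 / btrfs / xfs across the three run_test calls
    force=-f                         # make_filesystem uses -F for ext4, -f otherwise
    dev=/dev/nvme0n1p1               # GPT partition created by parted on the exported namespace
    mkfs."$fstype" "$force" "$dev"
    mount "$dev" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1      # controller still visible to the initiator
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible

The real/user/sys triplet printed before each END TEST banner is the shell's time output for that cycle.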
00:07:27.079 12:08:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@942 -- # return 0 00:07:27.079 12:08:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1969295 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.983 00:07:28.983 real 0m3.019s 00:07:28.983 user 0m0.035s 00:07:28.983 sys 0m0.077s 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:28.983 ************************************ 00:07:28.983 END TEST filesystem_xfs 00:07:28.983 ************************************ 00:07:28.983 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:07:29.551 
12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1969295 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 1969295 ']' 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # kill -0 1969295 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # uname 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1969295 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1969295' 00:07:29.551 killing process with pid 1969295 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # kill 1969295 00:07:29.551 [2024-05-15 12:08:57.981525] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:29.551 12:08:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # wait 1969295 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:30.119 00:07:30.119 real 0m12.528s 00:07:30.119 user 0m48.795s 00:07:30.119 sys 0m1.784s 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.119 ************************************ 00:07:30.119 END TEST nvmf_filesystem_no_in_capsule 00:07:30.119 ************************************ 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # 
'[' 3 -le 1 ']' 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.119 ************************************ 00:07:30.119 START TEST nvmf_filesystem_in_capsule 00:07:30.119 ************************************ 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 4096 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1971647 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1971647 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 1971647 ']' 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.119 12:08:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:30.119 [2024-05-15 12:08:58.482011] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:07:30.119 [2024-05-15 12:08:58.482054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.119 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.119 [2024-05-15 12:08:58.556029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.119 [2024-05-15 12:08:58.630900] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.119 [2024-05-15 12:08:58.630935] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:30.119 [2024-05-15 12:08:58.630944] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.119 [2024-05-15 12:08:58.630952] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.119 [2024-05-15 12:08:58.630975] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.119 [2024-05-15 12:08:58.631021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.119 [2024-05-15 12:08:58.631133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.119 [2024-05-15 12:08:58.631220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.119 [2024-05-15 12:08:58.631222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.057 [2024-05-15 12:08:59.330986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.057 Malloc1 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.057 12:08:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.057 [2024-05-15 12:08:59.473618] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:31.057 [2024-05-15 12:08:59.473866] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.057 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:07:31.058 { 00:07:31.058 "name": "Malloc1", 00:07:31.058 "aliases": [ 00:07:31.058 "57a6a2c4-0b77-4a6f-b2e0-e3000cbceb7a" 00:07:31.058 ], 00:07:31.058 "product_name": "Malloc disk", 00:07:31.058 "block_size": 512, 00:07:31.058 "num_blocks": 1048576, 00:07:31.058 "uuid": "57a6a2c4-0b77-4a6f-b2e0-e3000cbceb7a", 00:07:31.058 "assigned_rate_limits": { 00:07:31.058 "rw_ios_per_sec": 0, 00:07:31.058 "rw_mbytes_per_sec": 0, 00:07:31.058 "r_mbytes_per_sec": 0, 00:07:31.058 "w_mbytes_per_sec": 0 00:07:31.058 }, 00:07:31.058 "claimed": true, 00:07:31.058 "claim_type": "exclusive_write", 00:07:31.058 "zoned": false, 00:07:31.058 "supported_io_types": { 00:07:31.058 "read": true, 00:07:31.058 "write": true, 00:07:31.058 "unmap": true, 00:07:31.058 "write_zeroes": true, 00:07:31.058 "flush": true, 00:07:31.058 "reset": true, 
00:07:31.058 "compare": false, 00:07:31.058 "compare_and_write": false, 00:07:31.058 "abort": true, 00:07:31.058 "nvme_admin": false, 00:07:31.058 "nvme_io": false 00:07:31.058 }, 00:07:31.058 "memory_domains": [ 00:07:31.058 { 00:07:31.058 "dma_device_id": "system", 00:07:31.058 "dma_device_type": 1 00:07:31.058 }, 00:07:31.058 { 00:07:31.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.058 "dma_device_type": 2 00:07:31.058 } 00:07:31.058 ], 00:07:31.058 "driver_specific": {} 00:07:31.058 } 00:07:31.058 ]' 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:07:31.058 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:07:31.317 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:07:31.317 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:07:31.317 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:07:31.317 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:31.317 12:08:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.695 12:09:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.695 12:09:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:07:32.695 12:09:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:32.695 12:09:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:32.695 12:09:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:07:34.599 12:09:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:07:34.599 12:09:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:34.599 12:09:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.599 12:09:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:07:34.599 12:09:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.599 12:09:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:07:34.599 12:09:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:34.599 12:09:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:34.599 12:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:34.599 12:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:34.599 12:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:34.599 12:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:34.599 12:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:34.599 12:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:34.599 12:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:34.599 12:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:34.599 12:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:34.857 12:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:35.424 12:09:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.362 ************************************ 00:07:36.362 START TEST filesystem_in_capsule_ext4 00:07:36.362 ************************************ 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local force 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:07:36.362 12:09:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:36.362 mke2fs 1.46.5 (30-Dec-2021) 00:07:36.621 Discarding device blocks: 0/522240 done 00:07:36.621 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:36.621 Filesystem UUID: d24826f5-e7d5-484e-ae89-aa7638ae2aa1 00:07:36.621 Superblock backups stored on blocks: 00:07:36.621 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:36.621 00:07:36.621 Allocating group tables: 0/64 done 00:07:36.621 Writing inode tables: 0/64 done 00:07:39.157 Creating journal (8192 blocks): done 00:07:40.243 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:07:40.243 00:07:40.243 12:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@942 -- # return 0 00:07:40.243 12:09:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1971647 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:40.812 00:07:40.812 real 0m4.386s 00:07:40.812 user 0m0.018s 00:07:40.812 sys 0m0.092s 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:40.812 ************************************ 00:07:40.812 END TEST filesystem_in_capsule_ext4 00:07:40.812 ************************************ 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:40.812 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.072 ************************************ 00:07:41.072 START TEST filesystem_in_capsule_btrfs 00:07:41.072 ************************************ 00:07:41.072 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:41.072 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:41.072 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.072 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:41.072 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:07:41.072 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:41.072 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:07:41.072 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local force 00:07:41.072 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:07:41.072 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:07:41.072 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:41.072 btrfs-progs v6.6.2 00:07:41.072 See https://btrfs.readthedocs.io for more information. 00:07:41.072 00:07:41.072 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
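The mkfs.btrfs call traced just above goes through the same make_filesystem helper that formatted ext4 earlier (common/autotest_common.sh@923-@934, returning at @942). A minimal sketch of that dispatch, reconstructed only from the xtrace lines visible in this log; whatever the helper does between @934 and @942 (retries, error checks) is not shown here and is deliberately left out:

# Reconstruction of the make_filesystem dispatch as seen in the xtrace above.
# Only traced commands are reproduced; the code between mkfs and 'return 0'
# does not appear in this log and is omitted rather than guessed.
make_filesystem() {
    local fstype=$1          # ext4 | btrfs | xfs in this test
    local dev_name=$2        # /dev/nvme0n1p1 here
    local i=0                # assigned at @925; its use is not visible in this trace
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F             # mke2fs forces with -F (@928-@929)
    else
        force=-f             # mkfs.btrfs and mkfs.xfs force with -f (@931)
    fi
    mkfs."$fstype" "$force" "$dev_name"   # @934
}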
00:07:41.072 NOTE: several default settings have changed in version 5.15, please make sure 00:07:41.072 this does not affect your deployments: 00:07:41.072 - DUP for metadata (-m dup) 00:07:41.072 - enabled no-holes (-O no-holes) 00:07:41.072 - enabled free-space-tree (-R free-space-tree) 00:07:41.072 00:07:41.072 Label: (null) 00:07:41.072 UUID: c197396d-e2b9-4860-9c4c-1d854e4263c8 00:07:41.072 Node size: 16384 00:07:41.072 Sector size: 4096 00:07:41.072 Filesystem size: 510.00MiB 00:07:41.073 Block group profiles: 00:07:41.073 Data: single 8.00MiB 00:07:41.073 Metadata: DUP 32.00MiB 00:07:41.073 System: DUP 8.00MiB 00:07:41.073 SSD detected: yes 00:07:41.073 Zoned device: no 00:07:41.073 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:41.073 Runtime features: free-space-tree 00:07:41.073 Checksum: crc32c 00:07:41.073 Number of devices: 1 00:07:41.073 Devices: 00:07:41.073 ID SIZE PATH 00:07:41.073 1 510.00MiB /dev/nvme0n1p1 00:07:41.073 00:07:41.073 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@942 -- # return 0 00:07:41.073 12:09:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.011 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.011 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:42.011 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.011 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:42.011 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:42.011 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1971647 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.271 00:07:42.271 real 0m1.224s 00:07:42.271 user 0m0.033s 00:07:42.271 sys 0m0.141s 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:42.271 ************************************ 00:07:42.271 END TEST filesystem_in_capsule_btrfs 00:07:42.271 ************************************ 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.271 ************************************ 00:07:42.271 START TEST filesystem_in_capsule_xfs 00:07:42.271 ************************************ 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local i=0 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local force 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # force=-f 00:07:42.271 12:09:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:42.271 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:42.271 = sectsz=512 attr=2, projid32bit=1 00:07:42.271 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:42.271 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:42.271 data = bsize=4096 blocks=130560, imaxpct=25 00:07:42.271 = sunit=0 swidth=0 blks 00:07:42.271 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:42.271 log =internal log bsize=4096 blocks=16384, version=2 00:07:42.271 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:42.271 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:43.210 Discarding blocks...Done. 
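XFS is the third and last in-capsule variant; after the mkfs output above, the test runs the same smoke test already traced for ext4 and btrfs (target/filesystem.sh@23-@43). A condensed sketch built only from the commands visible in the trace; the i=0 at @29 presumably feeds an unmount retry that never triggers in these passing runs, so it appears here only as the traced assignment:

# Filesystem smoke test over the NVMe-oF/TCP namespace, as traced:
mount /dev/nvme0n1p1 /mnt/device          # @23  mount the freshly formatted partition
touch /mnt/device/aaa                     # @24  write through the exported namespace
sync                                      # @25
rm /mnt/device/aaa                        # @26
sync                                      # @27
i=0                                       # @29  counter for the unmount below
umount /mnt/device                        # @30
kill -0 "$nvmfpid"                        # @37  target app (pid 1971647 in this run) must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1     # @40  namespace still visible on the initiator
lsblk -l -o NAME | grep -q -w nvme0n1p1   # @43  and so is the partition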
00:07:43.210 12:09:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@942 -- # return 0 00:07:43.210 12:09:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1971647 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:45.746 00:07:45.746 real 0m3.507s 00:07:45.746 user 0m0.031s 00:07:45.746 sys 0m0.082s 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:45.746 ************************************ 00:07:45.746 END TEST filesystem_in_capsule_xfs 00:07:45.746 ************************************ 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:45.746 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:46.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.006 12:09:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1971647 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 1971647 ']' 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # kill -0 1971647 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # uname 00:07:46.006 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:46.007 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1971647 00:07:46.007 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:46.007 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:46.007 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1971647' 00:07:46.007 killing process with pid 1971647 00:07:46.007 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # kill 1971647 00:07:46.007 [2024-05-15 12:09:14.505531] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:46.007 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # wait 1971647 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:46.578 00:07:46.578 real 0m16.440s 00:07:46.578 user 1m4.222s 00:07:46.578 sys 0m1.956s 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.578 ************************************ 00:07:46.578 END TEST nvmf_filesystem_in_capsule 00:07:46.578 ************************************ 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:46.578 rmmod nvme_tcp 00:07:46.578 rmmod nvme_fabrics 00:07:46.578 rmmod nvme_keyring 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.578 12:09:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.131 12:09:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:49.131 00:07:49.131 real 0m38.229s 00:07:49.131 user 1m54.978s 00:07:49.131 sys 0m9.043s 00:07:49.131 12:09:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:49.131 12:09:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.131 ************************************ 00:07:49.131 END TEST nvmf_filesystem 00:07:49.131 ************************************ 00:07:49.132 12:09:17 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:49.132 12:09:17 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:49.132 12:09:17 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:49.132 12:09:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.132 ************************************ 00:07:49.132 START TEST nvmf_target_discovery 00:07:49.132 ************************************ 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:49.132 * Looking for test storage... 
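The nvmf_filesystem suite has just been torn down and the discovery test is starting. For reference, the teardown path traced immediately above reduces to the following commands (target/filesystem.sh@91-@102 plus nvmftestfini/nvmfcleanup from nvmf/common.sh); only commands that actually appear in the trace are listed, except the namespace removal, whose body the log hides behind _remove_spdk_ns and which is therefore an assumption:

# Teardown as traced before handing over to discovery.sh:
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1             # @91   drop the test partition
sync                                                       # @93
nvme disconnect -n nqn.2016-06.io.spdk:cnode1              # @94   detach the initiator
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # @97   remove the subsystem
killprocess "$nvmfpid"                                     # @101  stop nvmf_tgt (pid 1971647 here)
sync                                                       # nvmfcleanup, nvmf/common.sh@117
modprobe -v -r nvme-tcp                                    # @122  also unloads nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics                                # @123
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true        # _remove_spdk_ns: assumed equivalent, body not shown in this log
ip -4 addr flush cvl_0_1                                   # @279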
00:07:49.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:49.132 12:09:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.795 12:09:23 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:55.795 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:55.795 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:55.795 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:55.796 Found net devices under 0000:af:00.0: cvl_0_0 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:55.796 Found net devices under 0000:af:00.1: cvl_0_1 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:55.796 12:09:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:55.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:07:55.796 00:07:55.796 --- 10.0.0.2 ping statistics --- 00:07:55.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.796 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:55.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:55.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:07:55.796 00:07:55.796 --- 10.0.0.1 ping statistics --- 00:07:55.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.796 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1978842 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1978842 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # '[' -z 1978842 ']' 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:55.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:55.796 12:09:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.796 [2024-05-15 12:09:24.278938] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:07:55.796 [2024-05-15 12:09:24.278984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.796 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.055 [2024-05-15 12:09:24.352097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.055 [2024-05-15 12:09:24.426362] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.055 [2024-05-15 12:09:24.426399] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.055 [2024-05-15 12:09:24.426408] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.055 [2024-05-15 12:09:24.426416] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.055 [2024-05-15 12:09:24.426439] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.055 [2024-05-15 12:09:24.426480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.055 [2024-05-15 12:09:24.426574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.055 [2024-05-15 12:09:24.426650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.055 [2024-05-15 12:09:24.426652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.621 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:56.621 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@861 -- # return 0 00:07:56.621 12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:56.621 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:56.621 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.621 12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.621 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:56.621 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.621 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.621 [2024-05-15 12:09:25.138045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.621 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.621 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:56.879 12:09:25 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 Null1 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 [2024-05-15 12:09:25.190154] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:56.879 [2024-05-15 12:09:25.190350] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 Null2 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 Null3 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 Null4 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:56.879 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:07:57.136 00:07:57.136 Discovery Log Number of Records 6, Generation counter 6 00:07:57.136 =====Discovery Log Entry 0====== 00:07:57.136 trtype: tcp 00:07:57.136 adrfam: ipv4 00:07:57.136 subtype: current discovery subsystem 00:07:57.136 treq: not required 00:07:57.136 portid: 0 00:07:57.136 trsvcid: 4420 00:07:57.136 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:57.136 traddr: 10.0.0.2 00:07:57.136 eflags: explicit discovery connections, duplicate discovery information 00:07:57.136 sectype: none 00:07:57.136 =====Discovery Log Entry 1====== 00:07:57.136 trtype: tcp 00:07:57.136 adrfam: ipv4 00:07:57.136 subtype: nvme subsystem 00:07:57.136 treq: not required 00:07:57.136 portid: 0 00:07:57.136 trsvcid: 4420 00:07:57.136 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:57.136 traddr: 10.0.0.2 00:07:57.136 eflags: none 00:07:57.136 sectype: none 00:07:57.136 =====Discovery Log Entry 2====== 00:07:57.136 trtype: tcp 00:07:57.136 adrfam: ipv4 00:07:57.136 subtype: nvme subsystem 00:07:57.136 treq: not required 00:07:57.136 portid: 0 00:07:57.136 trsvcid: 4420 00:07:57.136 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:57.136 traddr: 10.0.0.2 00:07:57.137 eflags: none 00:07:57.137 sectype: none 00:07:57.137 =====Discovery Log Entry 3====== 00:07:57.137 trtype: tcp 00:07:57.137 adrfam: ipv4 00:07:57.137 subtype: nvme subsystem 00:07:57.137 treq: not required 00:07:57.137 portid: 0 00:07:57.137 trsvcid: 4420 00:07:57.137 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:57.137 traddr: 10.0.0.2 
00:07:57.137 eflags: none 00:07:57.137 sectype: none 00:07:57.137 =====Discovery Log Entry 4====== 00:07:57.137 trtype: tcp 00:07:57.137 adrfam: ipv4 00:07:57.137 subtype: nvme subsystem 00:07:57.137 treq: not required 00:07:57.137 portid: 0 00:07:57.137 trsvcid: 4420 00:07:57.137 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:57.137 traddr: 10.0.0.2 00:07:57.137 eflags: none 00:07:57.137 sectype: none 00:07:57.137 =====Discovery Log Entry 5====== 00:07:57.137 trtype: tcp 00:07:57.137 adrfam: ipv4 00:07:57.137 subtype: discovery subsystem referral 00:07:57.137 treq: not required 00:07:57.137 portid: 0 00:07:57.137 trsvcid: 4430 00:07:57.137 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:57.137 traddr: 10.0.0.2 00:07:57.137 eflags: none 00:07:57.137 sectype: none 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:57.137 Perform nvmf subsystem discovery via RPC 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.137 [ 00:07:57.137 { 00:07:57.137 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:57.137 "subtype": "Discovery", 00:07:57.137 "listen_addresses": [ 00:07:57.137 { 00:07:57.137 "trtype": "TCP", 00:07:57.137 "adrfam": "IPv4", 00:07:57.137 "traddr": "10.0.0.2", 00:07:57.137 "trsvcid": "4420" 00:07:57.137 } 00:07:57.137 ], 00:07:57.137 "allow_any_host": true, 00:07:57.137 "hosts": [] 00:07:57.137 }, 00:07:57.137 { 00:07:57.137 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.137 "subtype": "NVMe", 00:07:57.137 "listen_addresses": [ 00:07:57.137 { 00:07:57.137 "trtype": "TCP", 00:07:57.137 "adrfam": "IPv4", 00:07:57.137 "traddr": "10.0.0.2", 00:07:57.137 "trsvcid": "4420" 00:07:57.137 } 00:07:57.137 ], 00:07:57.137 "allow_any_host": true, 00:07:57.137 "hosts": [], 00:07:57.137 "serial_number": "SPDK00000000000001", 00:07:57.137 "model_number": "SPDK bdev Controller", 00:07:57.137 "max_namespaces": 32, 00:07:57.137 "min_cntlid": 1, 00:07:57.137 "max_cntlid": 65519, 00:07:57.137 "namespaces": [ 00:07:57.137 { 00:07:57.137 "nsid": 1, 00:07:57.137 "bdev_name": "Null1", 00:07:57.137 "name": "Null1", 00:07:57.137 "nguid": "4522BBBA37BD4B5486A0C4727D22F910", 00:07:57.137 "uuid": "4522bbba-37bd-4b54-86a0-c4727d22f910" 00:07:57.137 } 00:07:57.137 ] 00:07:57.137 }, 00:07:57.137 { 00:07:57.137 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:57.137 "subtype": "NVMe", 00:07:57.137 "listen_addresses": [ 00:07:57.137 { 00:07:57.137 "trtype": "TCP", 00:07:57.137 "adrfam": "IPv4", 00:07:57.137 "traddr": "10.0.0.2", 00:07:57.137 "trsvcid": "4420" 00:07:57.137 } 00:07:57.137 ], 00:07:57.137 "allow_any_host": true, 00:07:57.137 "hosts": [], 00:07:57.137 "serial_number": "SPDK00000000000002", 00:07:57.137 "model_number": "SPDK bdev Controller", 00:07:57.137 "max_namespaces": 32, 00:07:57.137 "min_cntlid": 1, 00:07:57.137 "max_cntlid": 65519, 00:07:57.137 "namespaces": [ 00:07:57.137 { 00:07:57.137 "nsid": 1, 00:07:57.137 "bdev_name": "Null2", 00:07:57.137 "name": "Null2", 00:07:57.137 "nguid": "3C566887A169421ABF6D0179EDAA00BC", 00:07:57.137 "uuid": "3c566887-a169-421a-bf6d-0179edaa00bc" 00:07:57.137 } 00:07:57.137 ] 00:07:57.137 }, 00:07:57.137 { 00:07:57.137 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:57.137 "subtype": "NVMe", 00:07:57.137 "listen_addresses": [ 
00:07:57.137 { 00:07:57.137 "trtype": "TCP", 00:07:57.137 "adrfam": "IPv4", 00:07:57.137 "traddr": "10.0.0.2", 00:07:57.137 "trsvcid": "4420" 00:07:57.137 } 00:07:57.137 ], 00:07:57.137 "allow_any_host": true, 00:07:57.137 "hosts": [], 00:07:57.137 "serial_number": "SPDK00000000000003", 00:07:57.137 "model_number": "SPDK bdev Controller", 00:07:57.137 "max_namespaces": 32, 00:07:57.137 "min_cntlid": 1, 00:07:57.137 "max_cntlid": 65519, 00:07:57.137 "namespaces": [ 00:07:57.137 { 00:07:57.137 "nsid": 1, 00:07:57.137 "bdev_name": "Null3", 00:07:57.137 "name": "Null3", 00:07:57.137 "nguid": "0FEB1EB821A6424BB8B18D4E6A009D8E", 00:07:57.137 "uuid": "0feb1eb8-21a6-424b-b8b1-8d4e6a009d8e" 00:07:57.137 } 00:07:57.137 ] 00:07:57.137 }, 00:07:57.137 { 00:07:57.137 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:57.137 "subtype": "NVMe", 00:07:57.137 "listen_addresses": [ 00:07:57.137 { 00:07:57.137 "trtype": "TCP", 00:07:57.137 "adrfam": "IPv4", 00:07:57.137 "traddr": "10.0.0.2", 00:07:57.137 "trsvcid": "4420" 00:07:57.137 } 00:07:57.137 ], 00:07:57.137 "allow_any_host": true, 00:07:57.137 "hosts": [], 00:07:57.137 "serial_number": "SPDK00000000000004", 00:07:57.137 "model_number": "SPDK bdev Controller", 00:07:57.137 "max_namespaces": 32, 00:07:57.137 "min_cntlid": 1, 00:07:57.137 "max_cntlid": 65519, 00:07:57.137 "namespaces": [ 00:07:57.137 { 00:07:57.137 "nsid": 1, 00:07:57.137 "bdev_name": "Null4", 00:07:57.137 "name": "Null4", 00:07:57.137 "nguid": "7DC8B50D3D224C7089C7E9AF7A3EB67F", 00:07:57.137 "uuid": "7dc8b50d-3d22-4c70-89c7-e9af7a3eb67f" 00:07:57.137 } 00:07:57.137 ] 00:07:57.137 } 00:07:57.137 ] 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.137 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:57.394 
12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:57.394 rmmod nvme_tcp 00:07:57.394 rmmod nvme_fabrics 00:07:57.394 rmmod nvme_keyring 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1978842 ']' 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1978842 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' -z 1978842 ']' 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # kill -0 1978842 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # uname 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1978842 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1978842' 00:07:57.394 killing process with pid 1978842 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # kill 1978842 00:07:57.394 [2024-05-15 12:09:25.813120] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:57.394 12:09:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@971 -- # wait 1978842 00:07:57.652 12:09:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:57.652 12:09:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:57.652 12:09:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:57.652 12:09:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.652 12:09:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:57.652 12:09:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.652 12:09:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.652 12:09:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.559 12:09:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:59.820 00:07:59.820 real 0m10.919s 00:07:59.820 user 
0m8.263s 00:07:59.820 sys 0m5.734s 00:07:59.820 12:09:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:59.820 12:09:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:59.820 ************************************ 00:07:59.820 END TEST nvmf_target_discovery 00:07:59.820 ************************************ 00:07:59.820 12:09:28 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:59.820 12:09:28 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:59.820 12:09:28 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:59.820 12:09:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:59.820 ************************************ 00:07:59.820 START TEST nvmf_referrals 00:07:59.820 ************************************ 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:59.820 * Looking for test storage... 00:07:59.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.820 12:09:28 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:59.820 12:09:28 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:59.820 12:09:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:07.946 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:07.946 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:07.946 Found net devices under 0000:af:00.0: cvl_0_0 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:07.946 Found net devices under 0000:af:00.1: cvl_0_1 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:07.946 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
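The nvmf_tcp_init trace above (and continuing below) builds the two-sided network layout the referrals test runs on: one port of the e810 NIC is moved into a target namespace while the other stays in the root namespace for the initiator. A minimal sketch of those steps, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing shown in this log:
ip netns add cvl_0_0_ns_spdk                                        # namespace for the SPDK target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side keeps 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side gets 10.0.0.2
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP (port 4420) on the initiator interface
ping -c 1 10.0.0.2                                                  # initiator-to-target reachability check, as in the log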
00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:07.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:08:07.947 00:08:07.947 --- 10.0.0.2 ping statistics --- 00:08:07.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.947 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:08:07.947 00:08:07.947 --- 10.0.0.1 ping statistics --- 00:08:07.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.947 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1982966 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1982966 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # '[' -z 1982966 ']' 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:07.947 12:09:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.947 [2024-05-15 12:09:35.459467] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:08:07.947 [2024-05-15 12:09:35.459513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.947 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.947 [2024-05-15 12:09:35.532835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.947 [2024-05-15 12:09:35.605568] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.947 [2024-05-15 12:09:35.605611] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.947 [2024-05-15 12:09:35.605620] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.947 [2024-05-15 12:09:35.605629] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.947 [2024-05-15 12:09:35.605636] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.947 [2024-05-15 12:09:35.605728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.947 [2024-05-15 12:09:35.605826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.947 [2024-05-15 12:09:35.605854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.947 [2024-05-15 12:09:35.605856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@861 -- # return 0 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.947 [2024-05-15 12:09:36.305902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.947 [2024-05-15 12:09:36.321903] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:07.947 [2024-05-15 12:09:36.322138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:07.947 12:09:36 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:07.947 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.207 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.208 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:08.208 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:08.208 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:08.208 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:08.467 12:09:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:08.727 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
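Condensed, the referral exercise traced above amounts to the following round trip; a sketch, assuming rpc_cmd in this trace resolves to SPDK's scripts/rpc.py and that the discovery service listens on 10.0.0.2:8009 as configured earlier:
# Point discovery clients at three additional discovery services on port 4430.
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
# Target-side view of the referrals ...
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
# ... and the host-side view: referrals appear as extra discovery log entries.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
# A referral can also carry a subsystem NQN (-n), and is removed the same way it was added.
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1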
00:08:08.986 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.245 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:09.245 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:09.245 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:09.245 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:09.245 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:09.245 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.245 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:09.245 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:09.245 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:09.245 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:09.245 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:09.246 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.246 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:09.505 12:09:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:09.505 rmmod nvme_tcp 00:08:09.505 rmmod nvme_fabrics 00:08:09.505 rmmod nvme_keyring 00:08:09.505 12:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:09.505 12:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:09.505 12:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:09.505 12:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1982966 ']' 00:08:09.505 12:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1982966 00:08:09.505 12:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' -z 1982966 ']' 00:08:09.505 12:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # kill -0 1982966 00:08:09.505 12:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # uname 00:08:09.505 12:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:09.505 12:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1982966 00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1982966' 00:08:09.765 killing process with pid 1982966 00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # kill 1982966 00:08:09.765 [2024-05-15 12:09:38.083321] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@971 -- # wait 1982966 00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
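The nvmftestfini teardown traced above and completed just below mirrors that setup. Roughly, and hedged because _remove_spdk_ns itself is not expanded anywhere in this trace:
modprobe -v -r nvme-tcp              # unload the host-side NVMe/TCP modules (the rmmod output above)
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                      # stop the nvmf_tgt reactor (pid 1982966 in this run)
ip netns delete cvl_0_0_ns_spdk      # assumption: _remove_spdk_ns deletes the target namespace
ip -4 addr flush cvl_0_1             # drop the initiator address, as the log shows next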
00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.765 12:09:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.310 12:09:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:12.310 00:08:12.310 real 0m12.183s 00:08:12.310 user 0m13.361s 00:08:12.310 sys 0m6.231s 00:08:12.310 12:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:12.310 12:09:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.310 ************************************ 00:08:12.310 END TEST nvmf_referrals 00:08:12.310 ************************************ 00:08:12.310 12:09:40 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:12.310 12:09:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:12.310 12:09:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:12.310 12:09:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.310 ************************************ 00:08:12.310 START TEST nvmf_connect_disconnect 00:08:12.310 ************************************ 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:12.310 * Looking for test storage... 00:08:12.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.310 12:09:40 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
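Further down, connect_disconnect.sh creates one Malloc-backed subsystem on the target and then logs in and out of it five times. Condensed from the rpc_cmd and NQN lines that follow in this trace, and sketched here with plain scripts/rpc.py and nvme-cli rather than the script's own wrappers, the flow is roughly:

  # Hedged sketch of the traced flow, not the script itself; values are the ones used in this run.
  # Target side, once nvmf_tgt is up (rpc_cmd in the trace effectively goes through scripts/rpc.py):
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512                         # creates bdev "Malloc0"
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side, one iteration of the connect/disconnect loop:
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the "disconnected 1 controller(s)" lines seen below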
00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.310 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:12.311 12:09:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:18.894 
12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:18.894 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:18.894 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:18.894 Found net devices under 0000:af:00.0: cvl_0_0 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.894 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:18.895 Found net devices under 0000:af:00.1: cvl_0_1 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:18.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:08:18.895 00:08:18.895 --- 10.0.0.2 ping statistics --- 00:08:18.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.895 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:18.895 00:08:18.895 --- 10.0.0.1 ping statistics --- 00:08:18.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.895 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1987113 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1987113 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # '[' -z 1987113 ']' 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:18.895 12:09:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.154 [2024-05-15 12:09:47.445386] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
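Before the target app comes up, nvmftestinit has already built the usual phy-mode loopback topology traced above: one port of the E810 pair (cvl_0_0) is moved into a private namespace and addressed as 10.0.0.2 for the target, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits the NVMe/TCP port before both directions are ping-tested. Condensed from the ip/iptables calls above:

  # Condensed from nvmf_tcp_init as traced above; interface names are the ones detected in this run.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                    # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target-facing port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.2                              # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1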
00:08:19.154 [2024-05-15 12:09:47.445431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.154 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.154 [2024-05-15 12:09:47.518118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.154 [2024-05-15 12:09:47.587995] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.154 [2024-05-15 12:09:47.588029] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.155 [2024-05-15 12:09:47.588039] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.155 [2024-05-15 12:09:47.588048] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.155 [2024-05-15 12:09:47.588055] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.155 [2024-05-15 12:09:47.588104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.155 [2024-05-15 12:09:47.588121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.155 [2024-05-15 12:09:47.588169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.155 [2024-05-15 12:09:47.588170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.722 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:19.722 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@861 -- # return 0 00:08:19.722 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.722 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:19.722 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.981 [2024-05-15 12:09:48.303106] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.981 12:09:48 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.981 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.982 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.982 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.982 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.982 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.982 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:19.982 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.982 [2024-05-15 12:09:48.357693] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:19.982 [2024-05-15 12:09:48.357943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.982 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:19.982 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:19.982 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:19.982 12:09:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:24.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:37.340 rmmod nvme_tcp 00:08:37.340 rmmod nvme_fabrics 00:08:37.340 rmmod nvme_keyring 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:37.340 12:10:05 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1987113 ']' 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1987113 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' -z 1987113 ']' 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # kill -0 1987113 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # uname 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1987113 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1987113' 00:08:37.340 killing process with pid 1987113 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # kill 1987113 00:08:37.340 [2024-05-15 12:10:05.706083] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:37.340 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # wait 1987113 00:08:37.600 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:37.600 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:37.600 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:37.600 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:37.600 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:37.600 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.600 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.600 12:10:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.507 12:10:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:39.507 00:08:39.507 real 0m27.537s 00:08:39.507 user 1m14.183s 00:08:39.507 sys 0m7.125s 00:08:39.507 12:10:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:39.507 12:10:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.507 ************************************ 00:08:39.507 END TEST nvmf_connect_disconnect 00:08:39.507 ************************************ 00:08:39.767 12:10:08 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:39.767 12:10:08 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:39.767 12:10:08 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:39.767 12:10:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:39.767 ************************************ 00:08:39.767 START TEST nvmf_multitarget 
00:08:39.767 ************************************ 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:39.767 * Looking for test storage... 00:08:39.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.767 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
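multitarget.sh drives the separate multitarget_rpc.py helper assigned to rpc_py above: it reads the default target count, creates two named targets, deletes them again, and checks the count with jq after each step. The calls traced further below condense to:

  # Hedged condensation of the multitarget flow traced below; paths and names as in this run.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length               # 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32     # add two extra named targets
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length               # now 3
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length               # back to 1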
00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:39.768 12:10:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:46.409 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:46.409 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:46.409 Found net devices under 0000:af:00.0: cvl_0_0 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:46.409 Found net devices under 0000:af:00.1: cvl_0_1 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.409 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.669 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.669 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:46.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:46.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:08:46.670 00:08:46.670 --- 10.0.0.2 ping statistics --- 00:08:46.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.670 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:08:46.670 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:08:46.670 00:08:46.670 --- 10.0.0.1 ping statistics --- 00:08:46.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.670 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:46.670 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.670 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:46.670 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:46.670 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.670 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:46.670 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:46.670 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.670 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:46.670 12:10:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1994113 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1994113 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # '[' -z 1994113 ']' 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:46.670 12:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:46.670 [2024-05-15 12:10:15.078401] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
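Both tests run the same device discovery before touching the network: gather_supported_nvmf_pci_devs matches the known Intel/Mellanox PCI IDs (the 0x159b E810 functions here) and then maps each PCI function to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names above come from. The essential lookup, with the function address seen in this run as the example, is just:

  # Hedged sketch of the PCI-function -> netdev lookup used by the detection loop above.
  pci=0000:af:00.0                                 # example address from this run
  # Any netdev bound to the function appears as a directory under its sysfs net/ node.
  pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)
  pci_net_devs=("${pci_net_devs[@]##*/}")          # strip the path, keep interface names
  echo "net devices under $pci: ${pci_net_devs[*]}"   # e.g. "cvl_0_0"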
00:08:46.670 [2024-05-15 12:10:15.078450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.670 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.670 [2024-05-15 12:10:15.153318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.933 [2024-05-15 12:10:15.230801] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.933 [2024-05-15 12:10:15.230833] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.933 [2024-05-15 12:10:15.230843] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.933 [2024-05-15 12:10:15.230852] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.933 [2024-05-15 12:10:15.230859] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.933 [2024-05-15 12:10:15.230902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.933 [2024-05-15 12:10:15.231017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.933 [2024-05-15 12:10:15.231046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.933 [2024-05-15 12:10:15.231048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.502 12:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:47.502 12:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@861 -- # return 0 00:08:47.502 12:10:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:47.502 12:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:47.502 12:10:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:47.502 12:10:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.502 12:10:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:47.502 12:10:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:47.502 12:10:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:47.761 12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:47.761 12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:47.761 "nvmf_tgt_1" 00:08:47.761 12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:47.761 "nvmf_tgt_2" 00:08:47.761 12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:47.761 12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:48.020 12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:48.020 
12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:48.020 true 00:08:48.020 12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:48.279 true 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:48.279 rmmod nvme_tcp 00:08:48.279 rmmod nvme_fabrics 00:08:48.279 rmmod nvme_keyring 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1994113 ']' 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1994113 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' -z 1994113 ']' 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # kill -0 1994113 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # uname 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1994113 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1994113' 00:08:48.279 killing process with pid 1994113 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # kill 1994113 00:08:48.279 12:10:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@971 -- # wait 1994113 00:08:48.539 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:48.539 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:48.539 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:48.539 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.539 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:48.539 12:10:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.539 12:10:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.539 12:10:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.078 12:10:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:51.078 00:08:51.078 real 0m10.973s 00:08:51.078 user 0m9.558s 00:08:51.078 sys 0m5.702s 00:08:51.078 12:10:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:51.078 12:10:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:51.078 ************************************ 00:08:51.078 END TEST nvmf_multitarget 00:08:51.078 ************************************ 00:08:51.078 12:10:19 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:51.078 12:10:19 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:51.078 12:10:19 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:51.078 12:10:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:51.078 ************************************ 00:08:51.078 START TEST nvmf_rpc 00:08:51.078 ************************************ 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:51.078 * Looking for test storage... 00:08:51.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.078 12:10:19 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.078 12:10:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.079 
12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:51.079 12:10:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:57.660 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:57.660 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:57.660 Found net devices under 0000:af:00.0: cvl_0_0 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.660 
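Annotation: the discovery logic traced here resolves each matching e810 PCI function to its interface name by globbing the device's net/ directory in sysfs, which is how 0000:af:00.0 maps to cvl_0_0. A rough equivalent of that lookup (device addresses taken from the log):

    for pci in 0000:af:00.0 0000:af:00.1; do
        # same sysfs glob the script expands: /sys/bus/pci/devices/$pci/net/<ifname>
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: $(basename "$netdir")"
        done
    done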
12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.660 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:57.661 Found net devices under 0000:af:00.1: cvl_0_1 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:57.661 12:10:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:57.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:57.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:08:57.661 00:08:57.661 --- 10.0.0.2 ping statistics --- 00:08:57.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.661 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:57.661 00:08:57.661 --- 10.0.0.1 ping statistics --- 00:08:57.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.661 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1998116 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1998116 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # '[' -z 1998116 ']' 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:57.661 12:10:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.922 [2024-05-15 12:10:26.234676] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
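Annotation: the nvmf_tcp_init portion of this trace is the usual two-port split for a phy run: one e810 port is moved into a private namespace to act as the target, the other stays in the root namespace as the initiator, TCP 4420 is opened, and one ping in each direction confirms the 10.0.0.0/24 link before nvmf_tgt starts. Condensed from the commands above:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns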
00:08:57.922 [2024-05-15 12:10:26.234724] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.922 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.922 [2024-05-15 12:10:26.308163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.922 [2024-05-15 12:10:26.377314] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.922 [2024-05-15 12:10:26.377355] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.922 [2024-05-15 12:10:26.377364] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.922 [2024-05-15 12:10:26.377373] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.922 [2024-05-15 12:10:26.377396] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.922 [2024-05-15 12:10:26.377449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.922 [2024-05-15 12:10:26.377544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.922 [2024-05-15 12:10:26.377575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.922 [2024-05-15 12:10:26.377574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.861 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:58.861 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@861 -- # return 0 00:08:58.861 12:10:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.861 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:58.861 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.861 12:10:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.861 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:58.861 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:58.861 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.861 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:58.861 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:58.861 "tick_rate": 2500000000, 00:08:58.861 "poll_groups": [ 00:08:58.861 { 00:08:58.861 "name": "nvmf_tgt_poll_group_000", 00:08:58.861 "admin_qpairs": 0, 00:08:58.861 "io_qpairs": 0, 00:08:58.861 "current_admin_qpairs": 0, 00:08:58.861 "current_io_qpairs": 0, 00:08:58.861 "pending_bdev_io": 0, 00:08:58.861 "completed_nvme_io": 0, 00:08:58.861 "transports": [] 00:08:58.861 }, 00:08:58.861 { 00:08:58.861 "name": "nvmf_tgt_poll_group_001", 00:08:58.861 "admin_qpairs": 0, 00:08:58.861 "io_qpairs": 0, 00:08:58.861 "current_admin_qpairs": 0, 00:08:58.861 "current_io_qpairs": 0, 00:08:58.861 "pending_bdev_io": 0, 00:08:58.861 "completed_nvme_io": 0, 00:08:58.861 "transports": [] 00:08:58.861 }, 00:08:58.861 { 00:08:58.861 "name": "nvmf_tgt_poll_group_002", 00:08:58.861 "admin_qpairs": 0, 00:08:58.861 "io_qpairs": 0, 00:08:58.861 "current_admin_qpairs": 0, 00:08:58.861 "current_io_qpairs": 0, 00:08:58.861 "pending_bdev_io": 0, 00:08:58.861 "completed_nvme_io": 0, 00:08:58.861 "transports": [] 
00:08:58.861 }, 00:08:58.861 { 00:08:58.861 "name": "nvmf_tgt_poll_group_003", 00:08:58.861 "admin_qpairs": 0, 00:08:58.862 "io_qpairs": 0, 00:08:58.862 "current_admin_qpairs": 0, 00:08:58.862 "current_io_qpairs": 0, 00:08:58.862 "pending_bdev_io": 0, 00:08:58.862 "completed_nvme_io": 0, 00:08:58.862 "transports": [] 00:08:58.862 } 00:08:58.862 ] 00:08:58.862 }' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.862 [2024-05-15 12:10:27.208472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:58.862 "tick_rate": 2500000000, 00:08:58.862 "poll_groups": [ 00:08:58.862 { 00:08:58.862 "name": "nvmf_tgt_poll_group_000", 00:08:58.862 "admin_qpairs": 0, 00:08:58.862 "io_qpairs": 0, 00:08:58.862 "current_admin_qpairs": 0, 00:08:58.862 "current_io_qpairs": 0, 00:08:58.862 "pending_bdev_io": 0, 00:08:58.862 "completed_nvme_io": 0, 00:08:58.862 "transports": [ 00:08:58.862 { 00:08:58.862 "trtype": "TCP" 00:08:58.862 } 00:08:58.862 ] 00:08:58.862 }, 00:08:58.862 { 00:08:58.862 "name": "nvmf_tgt_poll_group_001", 00:08:58.862 "admin_qpairs": 0, 00:08:58.862 "io_qpairs": 0, 00:08:58.862 "current_admin_qpairs": 0, 00:08:58.862 "current_io_qpairs": 0, 00:08:58.862 "pending_bdev_io": 0, 00:08:58.862 "completed_nvme_io": 0, 00:08:58.862 "transports": [ 00:08:58.862 { 00:08:58.862 "trtype": "TCP" 00:08:58.862 } 00:08:58.862 ] 00:08:58.862 }, 00:08:58.862 { 00:08:58.862 "name": "nvmf_tgt_poll_group_002", 00:08:58.862 "admin_qpairs": 0, 00:08:58.862 "io_qpairs": 0, 00:08:58.862 "current_admin_qpairs": 0, 00:08:58.862 "current_io_qpairs": 0, 00:08:58.862 "pending_bdev_io": 0, 00:08:58.862 "completed_nvme_io": 0, 00:08:58.862 "transports": [ 00:08:58.862 { 00:08:58.862 "trtype": "TCP" 00:08:58.862 } 00:08:58.862 ] 00:08:58.862 }, 00:08:58.862 { 00:08:58.862 "name": "nvmf_tgt_poll_group_003", 00:08:58.862 "admin_qpairs": 0, 00:08:58.862 "io_qpairs": 0, 00:08:58.862 "current_admin_qpairs": 0, 00:08:58.862 "current_io_qpairs": 0, 00:08:58.862 "pending_bdev_io": 0, 00:08:58.862 "completed_nvme_io": 0, 00:08:58.862 "transports": [ 00:08:58.862 { 00:08:58.862 "trtype": "TCP" 00:08:58.862 } 00:08:58.862 ] 00:08:58.862 } 00:08:58.862 ] 
00:08:58.862 }' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.862 Malloc1 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:58.862 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.862 [2024-05-15 12:10:27.387284] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:58.862 [2024-05-15 12:10:27.387601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.122 12:10:27 
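Annotation: taken together, the RPCs issued above provision the target end to end before the connect tests start. A condensed sketch using the harness's rpc_cmd wrapper exactly as it appears in the trace (all values copied from the log):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192        # TCP transport, same options rpc.sh uses above
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1           # 64 MB malloc bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # -d: hosts must be added explicitly
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The serial SPDKISFASTANDAWESOME set here is what waitforserial greps for on the initiator side later in the run.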
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:59.122 [2024-05-15 12:10:27.416274] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:08:59.122 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:59.122 could not add new controller: failed to write to nvme-fabrics device 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:59.122 12:10:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.503 12:10:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
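Annotation: this is the access-control check the test is after. With allow_any_host disabled and no hosts registered, the kernel initiator's connect is rejected (the "does not allow host" message and the Input/output error above are the expected outcome); once the host NQN is added, the same connect succeeds. Schematically, with the NQN/host ID from the trace:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    HOSTID=006f0d1b-21c0-e711-906e-00163566263e

    # expected to fail while the host is not on the subsystem's allowed list
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 || echo "rejected as expected"

    # register the host NQN, then the same connect goes through
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420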
00:09:00.503 12:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:09:00.503 12:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:09:00.503 12:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:09:00.503 12:10:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:02.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:09:02.411 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:09:02.670 12:10:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.670 [2024-05-15 12:10:30.984839] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:09:02.670 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:02.670 could not add new controller: failed to write to nvme-fabrics device 00:09:02.670 12:10:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:09:02.670 12:10:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:02.670 12:10:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:02.670 12:10:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:02.670 12:10:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:02.670 12:10:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:02.670 12:10:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.670 12:10:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:02.670 12:10:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:04.112 12:10:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:04.112 12:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:09:04.112 12:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.112 12:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:09:04.112 12:10:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:06.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.016 12:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.276 [2024-05-15 12:10:34.563723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:06.276 12:10:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.656 12:10:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:07.656 12:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:09:07.656 12:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.656 12:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:09:07.656 12:10:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:09:09.563 12:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:09:09.563 
12:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:09:09.563 12:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.563 12:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:09:09.563 12:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.563 12:10:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:09:09.563 12:10:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.563 [2024-05-15 12:10:38.081997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.563 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:09:09.823 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:09.823 12:10:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:09.823 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.823 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.823 12:10:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:09.823 12:10:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:11.202 12:10:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.202 12:10:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:09:11.202 12:10:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.202 12:10:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:09:11.202 12:10:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.109 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:13.109 12:10:41 
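Annotation: the remaining iterations repeat the same create/connect/teardown cycle; target/rpc.sh drives it five times (loops=5 earlier in the trace). One iteration, condensed from the commands above:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    HOSTID=006f0d1b-21c0-e711-906e-00163566263e
    for i in $(seq 1 5); do   # loops=5 in the trace
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME            # wait until lsblk shows the namespace
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done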
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.110 [2024-05-15 12:10:41.582725] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:13.110 12:10:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.490 12:10:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:14.490 12:10:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:09:14.490 12:10:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.490 12:10:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:09:14.490 12:10:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:09:16.404 12:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:09:16.404 12:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:09:16.404 12:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.404 12:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:09:16.404 12:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.404 12:10:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:09:16.404 12:10:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # local i=0 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.664 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.664 [2024-05-15 12:10:45.193594] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.923 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.923 12:10:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:16.923 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.923 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.923 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.923 12:10:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:16.923 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.923 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.923 12:10:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.923 12:10:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.302 12:10:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:09:18.302 12:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:09:18.302 12:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.302 12:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:09:18.302 12:10:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.210 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.470 
[2024-05-15 12:10:48.741019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.470 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.470 12:10:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:20.470 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.470 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.470 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.470 12:10:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.470 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:20.470 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.470 12:10:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:20.470 12:10:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.849 12:10:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.849 12:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:09:21.849 12:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.849 12:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:09:21.849 12:10:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.783 [2024-05-15 12:10:52.294372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.783 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 
-- # xtrace_disable 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.044 [2024-05-15 12:10:52.342478] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.044 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 [2024-05-15 12:10:52.394632] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.045 
12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 [2024-05-15 12:10:52.442809] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 [2024-05-15 12:10:52.490973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
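
The second loop above (target/rpc.sh@99-107) exercises the same RPCs without a host connection: the namespace is added with no explicit NSID and removed again as NSID 1, five times in a row, before the nvmf_get_stats dump that follows. One pass, in the same sketch form as before:

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # NSID left to the target
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
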
00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:24.045 "tick_rate": 2500000000, 00:09:24.045 "poll_groups": [ 00:09:24.045 { 00:09:24.045 "name": "nvmf_tgt_poll_group_000", 00:09:24.045 "admin_qpairs": 2, 00:09:24.045 "io_qpairs": 196, 00:09:24.045 "current_admin_qpairs": 0, 00:09:24.045 "current_io_qpairs": 0, 00:09:24.045 "pending_bdev_io": 0, 00:09:24.045 "completed_nvme_io": 295, 00:09:24.045 "transports": [ 00:09:24.045 { 00:09:24.045 "trtype": "TCP" 00:09:24.045 } 00:09:24.045 ] 00:09:24.045 }, 00:09:24.045 { 00:09:24.045 "name": "nvmf_tgt_poll_group_001", 00:09:24.045 "admin_qpairs": 2, 00:09:24.045 "io_qpairs": 196, 00:09:24.045 "current_admin_qpairs": 0, 00:09:24.045 "current_io_qpairs": 0, 00:09:24.045 "pending_bdev_io": 0, 00:09:24.045 "completed_nvme_io": 295, 00:09:24.045 "transports": [ 00:09:24.045 { 00:09:24.045 "trtype": "TCP" 00:09:24.045 } 00:09:24.045 ] 00:09:24.045 }, 00:09:24.045 { 00:09:24.045 "name": "nvmf_tgt_poll_group_002", 00:09:24.045 "admin_qpairs": 1, 00:09:24.045 "io_qpairs": 196, 00:09:24.045 "current_admin_qpairs": 0, 00:09:24.045 "current_io_qpairs": 0, 00:09:24.045 "pending_bdev_io": 0, 00:09:24.045 "completed_nvme_io": 248, 00:09:24.045 "transports": [ 00:09:24.045 { 00:09:24.045 "trtype": "TCP" 00:09:24.045 } 00:09:24.045 ] 00:09:24.045 }, 00:09:24.045 { 00:09:24.045 "name": "nvmf_tgt_poll_group_003", 00:09:24.045 "admin_qpairs": 2, 00:09:24.045 "io_qpairs": 196, 00:09:24.045 "current_admin_qpairs": 0, 00:09:24.045 "current_io_qpairs": 0, 00:09:24.045 "pending_bdev_io": 0, 00:09:24.045 "completed_nvme_io": 296, 00:09:24.045 "transports": [ 00:09:24.045 { 00:09:24.045 "trtype": "TCP" 00:09:24.045 } 00:09:24.045 ] 00:09:24.045 } 00:09:24.045 ] 00:09:24.045 }' 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:24.045 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.306 rmmod nvme_tcp 00:09:24.306 rmmod nvme_fabrics 00:09:24.306 rmmod nvme_keyring 00:09:24.306 
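
The jsum checks above total a per-poll-group counter from the captured nvmf_get_stats JSON by filtering with jq and summing with awk. A sketch of the same aggregation, assuming the output is held in the $stats variable shown above; the bracketed jq form avoids the awk step entirely:

    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 2+2+1+2 = 7
    echo "$stats" | jq '[.poll_groups[].io_qpairs] | add'                            # 196*4 = 784
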
12:10:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1998116 ']' 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1998116 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' -z 1998116 ']' 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # kill -0 1998116 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # uname 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1998116 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1998116' 00:09:24.306 killing process with pid 1998116 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # kill 1998116 00:09:24.306 [2024-05-15 12:10:52.776298] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:24.306 12:10:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@971 -- # wait 1998116 00:09:24.566 12:10:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.566 12:10:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:24.566 12:10:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:24.566 12:10:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.566 12:10:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.566 12:10:53 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.566 12:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.566 12:10:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.112 12:10:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:27.112 00:09:27.112 real 0m35.928s 00:09:27.112 user 1m47.026s 00:09:27.112 sys 0m8.295s 00:09:27.113 12:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:27.113 12:10:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.113 ************************************ 00:09:27.113 END TEST nvmf_rpc 00:09:27.113 ************************************ 00:09:27.113 12:10:55 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:27.113 12:10:55 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:27.113 12:10:55 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:27.113 12:10:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.113 ************************************ 00:09:27.113 START TEST nvmf_invalid 00:09:27.113 ************************************ 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- 
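
The nvmf_rpc run above ends with nvmftestfini unwinding the fixture in the order traced: unload the host-side NVMe/TCP modules, kill the nvmf_tgt process, then drop the SPDK network namespace and flush the initiator-side address. A rough standalone equivalent, assuming $nvmfpid holds the target PID and that _remove_spdk_ns (whose output is redirected away in this log) amounts to deleting the cvl_0_0_ns_spdk namespace:

    modprobe -r nvme-tcp      # also pulls out nvme_fabrics and nvme_keyring, as shown above
    modprobe -r nvme-fabrics
    kill "$nvmfpid"
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done   # killprocess uses wait; polling also works outside the launching shell
    ip netns delete cvl_0_0_ns_spdk                            # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
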
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:27.113 * Looking for test storage... 00:09:27.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:27.113 12:10:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.733 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:33.734 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:33.734 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:33.734 Found net devices under 0000:af:00.0: cvl_0_0 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:33.734 Found net devices under 0000:af:00.1: cvl_0_1 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:33.734 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:33.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:33.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:09:33.994 00:09:33.994 --- 10.0.0.2 ping statistics --- 00:09:33.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.994 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:09:33.994 00:09:33.994 --- 10.0.0.1 ping statistics --- 00:09:33.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.994 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@721 -- # xtrace_disable 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2006653 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2006653 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # '[' -z 2006653 ']' 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local max_retries=100 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # xtrace_disable 00:09:33.994 12:11:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:33.994 [2024-05-15 12:11:02.478523] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
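
nvmftestinit and nvmfappstart above build the whole TCP fixture: one port of the NIC pair discovered above (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the other (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP port 4420 is opened, both directions are pinged, and nvmf_tgt is started inside the namespace. A condensed sketch using the interface names from this run; the RPC poll at the end is only an approximation of waitforlisten, assuming the default /var/tmp/spdk.sock:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # -e 0xFFFF enables all tracepoint groups, -m 0xF runs reactors on cores 0-3
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 1; done
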
00:09:33.994 [2024-05-15 12:11:02.478571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.994 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.253 [2024-05-15 12:11:02.553296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.253 [2024-05-15 12:11:02.622797] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.253 [2024-05-15 12:11:02.622854] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.253 [2024-05-15 12:11:02.622865] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.253 [2024-05-15 12:11:02.622874] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.253 [2024-05-15 12:11:02.622881] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.253 [2024-05-15 12:11:02.622941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.253 [2024-05-15 12:11:02.623058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.253 [2024-05-15 12:11:02.623085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.253 [2024-05-15 12:11:02.623087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.818 12:11:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:09:34.818 12:11:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@861 -- # return 0 00:09:34.818 12:11:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.818 12:11:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:34.818 12:11:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:34.818 12:11:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.818 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:34.818 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6949 00:09:35.076 [2024-05-15 12:11:03.489533] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:35.076 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:35.076 { 00:09:35.076 "nqn": "nqn.2016-06.io.spdk:cnode6949", 00:09:35.076 "tgt_name": "foobar", 00:09:35.076 "method": "nvmf_create_subsystem", 00:09:35.076 "req_id": 1 00:09:35.076 } 00:09:35.076 Got JSON-RPC error response 00:09:35.076 response: 00:09:35.076 { 00:09:35.076 "code": -32603, 00:09:35.076 "message": "Unable to find target foobar" 00:09:35.076 }' 00:09:35.076 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:35.076 { 00:09:35.076 "nqn": "nqn.2016-06.io.spdk:cnode6949", 00:09:35.076 "tgt_name": "foobar", 00:09:35.076 "method": "nvmf_create_subsystem", 00:09:35.076 "req_id": 1 00:09:35.076 } 00:09:35.076 Got JSON-RPC error response 00:09:35.076 response: 00:09:35.076 { 00:09:35.076 "code": -32603, 00:09:35.076 "message": "Unable to find target foobar" 00:09:35.076 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:35.076 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:35.076 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19033 00:09:35.334 [2024-05-15 12:11:03.674218] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19033: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:35.334 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:35.334 { 00:09:35.334 "nqn": "nqn.2016-06.io.spdk:cnode19033", 00:09:35.334 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:35.334 "method": "nvmf_create_subsystem", 00:09:35.334 "req_id": 1 00:09:35.334 } 00:09:35.334 Got JSON-RPC error response 00:09:35.334 response: 00:09:35.334 { 00:09:35.334 "code": -32602, 00:09:35.334 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:35.334 }' 00:09:35.334 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:35.334 { 00:09:35.334 "nqn": "nqn.2016-06.io.spdk:cnode19033", 00:09:35.334 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:35.334 "method": "nvmf_create_subsystem", 00:09:35.334 "req_id": 1 00:09:35.334 } 00:09:35.334 Got JSON-RPC error response 00:09:35.334 response: 00:09:35.334 { 00:09:35.334 "code": -32602, 00:09:35.334 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:35.334 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:35.334 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:35.334 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17524 00:09:35.594 [2024-05-15 12:11:03.866798] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17524: invalid model number 'SPDK_Controller' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:35.594 { 00:09:35.594 "nqn": "nqn.2016-06.io.spdk:cnode17524", 00:09:35.594 "model_number": "SPDK_Controller\u001f", 00:09:35.594 "method": "nvmf_create_subsystem", 00:09:35.594 "req_id": 1 00:09:35.594 } 00:09:35.594 Got JSON-RPC error response 00:09:35.594 response: 00:09:35.594 { 00:09:35.594 "code": -32602, 00:09:35.594 "message": "Invalid MN SPDK_Controller\u001f" 00:09:35.594 }' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:35.594 { 00:09:35.594 "nqn": "nqn.2016-06.io.spdk:cnode17524", 00:09:35.594 "model_number": "SPDK_Controller\u001f", 00:09:35.594 "method": "nvmf_create_subsystem", 00:09:35.594 "req_id": 1 00:09:35.594 } 00:09:35.594 Got JSON-RPC error response 00:09:35.594 response: 00:09:35.594 { 00:09:35.594 "code": -32602, 00:09:35.594 "message": "Invalid MN SPDK_Controller\u001f" 00:09:35.594 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 104 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ B == \- ]] 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'B8#wb]MaK+^(d"uhCS)4.' 00:09:35.594 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'B8#wb]MaK+^(d"uhCS)4.' nqn.2016-06.io.spdk:cnode5235 00:09:35.854 [2024-05-15 12:11:04.223969] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5235: invalid serial number 'B8#wb]MaK+^(d"uhCS)4.' 
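The trace above shows target/invalid.sh probing nvmf_create_subsystem with malformed serial numbers: first a fixed string carrying a 0x1f control byte, then a 21-character string assembled by gen_random_s from ASCII codes 32-127; both are rejected with "Invalid SN". A minimal sketch of that check follows (the rpc.py path and the cnode NQN are placeholders rather than values from this run, and a running SPDK nvmf target is assumed):

    #!/usr/bin/env bash
    # Sketch of the invalid-serial-number check exercised above.
    # Assumes a running SPDK nvmf target; RPC path and NQN are placeholders.
    RPC=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # A serial number containing a non-printable byte (0x1f) should be rejected.
    bad_sn=$'SPDKISFASTANDAWESOME\x1f'

    out=$("$RPC" nvmf_create_subsystem -s "$bad_sn" "$NQN" 2>&1) || true

    if [[ $out == *"Invalid SN"* ]]; then
        echo "serial number validation rejected the bad SN as expected"
    else
        echo "unexpected response: $out" >&2
        exit 1
    fi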
00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:35.854 { 00:09:35.854 "nqn": "nqn.2016-06.io.spdk:cnode5235", 00:09:35.854 "serial_number": "B8#wb]MaK+^(d\"uhCS)4.", 00:09:35.854 "method": "nvmf_create_subsystem", 00:09:35.854 "req_id": 1 00:09:35.854 } 00:09:35.854 Got JSON-RPC error response 00:09:35.854 response: 00:09:35.854 { 00:09:35.854 "code": -32602, 00:09:35.854 "message": "Invalid SN B8#wb]MaK+^(d\"uhCS)4." 00:09:35.854 }' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:35.854 { 00:09:35.854 "nqn": "nqn.2016-06.io.spdk:cnode5235", 00:09:35.854 "serial_number": "B8#wb]MaK+^(d\"uhCS)4.", 00:09:35.854 "method": "nvmf_create_subsystem", 00:09:35.854 "req_id": 1 00:09:35.854 } 00:09:35.854 Got JSON-RPC error response 00:09:35.854 response: 00:09:35.854 { 00:09:35.854 "code": -32602, 00:09:35.854 "message": "Invalid SN B8#wb]MaK+^(d\"uhCS)4." 00:09:35.854 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x37' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.854 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=- 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.855 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.114 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '2Dr7)(p:>>&-@.0nuiNGa}Klx.' 00:09:36.115 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '2Dr7)(p:>>&-@.0nuiNGa}Klx.' nqn.2016-06.io.spdk:cnode6858 00:09:36.374 [2024-05-15 12:11:04.721645] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6858: invalid model number '2Dr7)(p:>>&-@.0nuiNGa}Klx.' 00:09:36.374 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:36.374 { 00:09:36.374 "nqn": "nqn.2016-06.io.spdk:cnode6858", 00:09:36.374 "model_number": "2Dr7)(p:>>&-@.0nu\u007fiNGa}Klx.", 00:09:36.374 "method": "nvmf_create_subsystem", 00:09:36.374 "req_id": 1 00:09:36.374 } 00:09:36.374 Got JSON-RPC error response 00:09:36.374 response: 00:09:36.374 { 00:09:36.374 "code": -32602, 00:09:36.374 "message": "Invalid MN 2Dr7)(p:>>&-@.0nu\u007fiNGa}Klx." 
00:09:36.374 }' 00:09:36.374 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:36.374 { 00:09:36.374 "nqn": "nqn.2016-06.io.spdk:cnode6858", 00:09:36.374 "model_number": "2Dr7)(p:>>&-@.0nu\u007fiNGa}Klx.", 00:09:36.374 "method": "nvmf_create_subsystem", 00:09:36.374 "req_id": 1 00:09:36.374 } 00:09:36.374 Got JSON-RPC error response 00:09:36.374 response: 00:09:36.374 { 00:09:36.374 "code": -32602, 00:09:36.374 "message": "Invalid MN 2Dr7)(p:>>&-@.0nu\u007fiNGa}Klx." 00:09:36.374 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:36.374 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:36.374 [2024-05-15 12:11:04.902331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.632 12:11:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:36.632 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:36.632 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:36.632 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:36.632 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:36.632 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:36.891 [2024-05-15 12:11:05.283563] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:36.891 [2024-05-15 12:11:05.283630] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:36.891 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:36.891 { 00:09:36.891 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:36.891 "listen_address": { 00:09:36.891 "trtype": "tcp", 00:09:36.891 "traddr": "", 00:09:36.891 "trsvcid": "4421" 00:09:36.891 }, 00:09:36.891 "method": "nvmf_subsystem_remove_listener", 00:09:36.891 "req_id": 1 00:09:36.891 } 00:09:36.891 Got JSON-RPC error response 00:09:36.891 response: 00:09:36.891 { 00:09:36.891 "code": -32602, 00:09:36.891 "message": "Invalid parameters" 00:09:36.891 }' 00:09:36.891 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:36.891 { 00:09:36.891 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:36.891 "listen_address": { 00:09:36.891 "trtype": "tcp", 00:09:36.891 "traddr": "", 00:09:36.891 "trsvcid": "4421" 00:09:36.891 }, 00:09:36.891 "method": "nvmf_subsystem_remove_listener", 00:09:36.891 "req_id": 1 00:09:36.891 } 00:09:36.891 Got JSON-RPC error response 00:09:36.891 response: 00:09:36.891 { 00:09:36.891 "code": -32602, 00:09:36.891 "message": "Invalid parameters" 00:09:36.891 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:36.891 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24704 -i 0 00:09:37.150 [2024-05-15 12:11:05.480247] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24704: invalid cntlid range [0-65519] 00:09:37.150 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:37.150 { 
00:09:37.150 "nqn": "nqn.2016-06.io.spdk:cnode24704", 00:09:37.150 "min_cntlid": 0, 00:09:37.150 "method": "nvmf_create_subsystem", 00:09:37.150 "req_id": 1 00:09:37.150 } 00:09:37.150 Got JSON-RPC error response 00:09:37.150 response: 00:09:37.150 { 00:09:37.150 "code": -32602, 00:09:37.150 "message": "Invalid cntlid range [0-65519]" 00:09:37.150 }' 00:09:37.150 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:37.150 { 00:09:37.150 "nqn": "nqn.2016-06.io.spdk:cnode24704", 00:09:37.150 "min_cntlid": 0, 00:09:37.150 "method": "nvmf_create_subsystem", 00:09:37.150 "req_id": 1 00:09:37.150 } 00:09:37.150 Got JSON-RPC error response 00:09:37.150 response: 00:09:37.150 { 00:09:37.150 "code": -32602, 00:09:37.150 "message": "Invalid cntlid range [0-65519]" 00:09:37.150 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.150 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20759 -i 65520 00:09:37.150 [2024-05-15 12:11:05.672905] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20759: invalid cntlid range [65520-65519] 00:09:37.409 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:37.409 { 00:09:37.409 "nqn": "nqn.2016-06.io.spdk:cnode20759", 00:09:37.409 "min_cntlid": 65520, 00:09:37.409 "method": "nvmf_create_subsystem", 00:09:37.409 "req_id": 1 00:09:37.409 } 00:09:37.409 Got JSON-RPC error response 00:09:37.409 response: 00:09:37.409 { 00:09:37.409 "code": -32602, 00:09:37.409 "message": "Invalid cntlid range [65520-65519]" 00:09:37.409 }' 00:09:37.409 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:37.409 { 00:09:37.409 "nqn": "nqn.2016-06.io.spdk:cnode20759", 00:09:37.409 "min_cntlid": 65520, 00:09:37.409 "method": "nvmf_create_subsystem", 00:09:37.409 "req_id": 1 00:09:37.409 } 00:09:37.409 Got JSON-RPC error response 00:09:37.409 response: 00:09:37.409 { 00:09:37.409 "code": -32602, 00:09:37.409 "message": "Invalid cntlid range [65520-65519]" 00:09:37.409 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.409 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31643 -I 0 00:09:37.409 [2024-05-15 12:11:05.861528] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31643: invalid cntlid range [1-0] 00:09:37.409 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:37.409 { 00:09:37.409 "nqn": "nqn.2016-06.io.spdk:cnode31643", 00:09:37.409 "max_cntlid": 0, 00:09:37.409 "method": "nvmf_create_subsystem", 00:09:37.409 "req_id": 1 00:09:37.409 } 00:09:37.409 Got JSON-RPC error response 00:09:37.409 response: 00:09:37.409 { 00:09:37.409 "code": -32602, 00:09:37.409 "message": "Invalid cntlid range [1-0]" 00:09:37.409 }' 00:09:37.409 12:11:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:37.409 { 00:09:37.409 "nqn": "nqn.2016-06.io.spdk:cnode31643", 00:09:37.409 "max_cntlid": 0, 00:09:37.409 "method": "nvmf_create_subsystem", 00:09:37.409 "req_id": 1 00:09:37.409 } 00:09:37.409 Got JSON-RPC error response 00:09:37.409 response: 00:09:37.409 { 00:09:37.409 "code": -32602, 00:09:37.409 "message": "Invalid cntlid range [1-0]" 00:09:37.409 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.409 12:11:05 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8018 -I 65520 00:09:37.668 [2024-05-15 12:11:06.050152] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8018: invalid cntlid range [1-65520] 00:09:37.668 12:11:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:37.668 { 00:09:37.668 "nqn": "nqn.2016-06.io.spdk:cnode8018", 00:09:37.668 "max_cntlid": 65520, 00:09:37.668 "method": "nvmf_create_subsystem", 00:09:37.668 "req_id": 1 00:09:37.668 } 00:09:37.668 Got JSON-RPC error response 00:09:37.668 response: 00:09:37.668 { 00:09:37.668 "code": -32602, 00:09:37.668 "message": "Invalid cntlid range [1-65520]" 00:09:37.668 }' 00:09:37.668 12:11:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:37.668 { 00:09:37.668 "nqn": "nqn.2016-06.io.spdk:cnode8018", 00:09:37.668 "max_cntlid": 65520, 00:09:37.668 "method": "nvmf_create_subsystem", 00:09:37.668 "req_id": 1 00:09:37.668 } 00:09:37.668 Got JSON-RPC error response 00:09:37.668 response: 00:09:37.668 { 00:09:37.668 "code": -32602, 00:09:37.668 "message": "Invalid cntlid range [1-65520]" 00:09:37.668 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.668 12:11:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9046 -i 6 -I 5 00:09:37.927 [2024-05-15 12:11:06.234802] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9046: invalid cntlid range [6-5] 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:37.927 { 00:09:37.927 "nqn": "nqn.2016-06.io.spdk:cnode9046", 00:09:37.927 "min_cntlid": 6, 00:09:37.927 "max_cntlid": 5, 00:09:37.927 "method": "nvmf_create_subsystem", 00:09:37.927 "req_id": 1 00:09:37.927 } 00:09:37.927 Got JSON-RPC error response 00:09:37.927 response: 00:09:37.927 { 00:09:37.927 "code": -32602, 00:09:37.927 "message": "Invalid cntlid range [6-5]" 00:09:37.927 }' 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:37.927 { 00:09:37.927 "nqn": "nqn.2016-06.io.spdk:cnode9046", 00:09:37.927 "min_cntlid": 6, 00:09:37.927 "max_cntlid": 5, 00:09:37.927 "method": "nvmf_create_subsystem", 00:09:37.927 "req_id": 1 00:09:37.927 } 00:09:37.927 Got JSON-RPC error response 00:09:37.927 response: 00:09:37.927 { 00:09:37.927 "code": -32602, 00:09:37.927 "message": "Invalid cntlid range [6-5]" 00:09:37.927 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:37.927 { 00:09:37.927 "name": "foobar", 00:09:37.927 "method": "nvmf_delete_target", 00:09:37.927 "req_id": 1 00:09:37.927 } 00:09:37.927 Got JSON-RPC error response 00:09:37.927 response: 00:09:37.927 { 00:09:37.927 "code": -32602, 00:09:37.927 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:09:37.927 }' 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:37.927 { 00:09:37.927 "name": "foobar", 00:09:37.927 "method": "nvmf_delete_target", 00:09:37.927 "req_id": 1 00:09:37.927 } 00:09:37.927 Got JSON-RPC error response 00:09:37.927 response: 00:09:37.927 { 00:09:37.927 "code": -32602, 00:09:37.927 "message": "The specified target doesn't exist, cannot delete it." 00:09:37.927 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.927 rmmod nvme_tcp 00:09:37.927 rmmod nvme_fabrics 00:09:37.927 rmmod nvme_keyring 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2006653 ']' 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2006653 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@947 -- # '[' -z 2006653 ']' 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # kill -0 2006653 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # uname 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:37.927 12:11:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2006653 00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2006653' 00:09:38.187 killing process with pid 2006653 00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # kill 2006653 00:09:38.187 [2024-05-15 12:11:06.497771] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@971 -- # wait 2006653 00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 
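The steps above exercise the controller-ID range validation of nvmf_create_subsystem: -i sets min_cntlid and -I sets max_cntlid, and the target answers "Invalid cntlid range [...]" whenever a bound falls outside 1-65519 or min exceeds max (0, 65520, and the 6-5 ordering are all rejected in the trace). A condensed sketch of those checks, using a hypothetical helper and placeholder NQNs, with the same rpc.py assumptions as before:

    # Sketch of the cntlid range checks shown above (-i = min_cntlid, -I = max_cntlid).
    # Assumes a running SPDK nvmf target; RPC path and NQNs are placeholders.
    RPC=./scripts/rpc.py

    check_rejected() {
        local out
        out=$("$RPC" nvmf_create_subsystem "$@" 2>&1) || true
        if [[ $out == *"Invalid cntlid range"* ]]; then
            echo "rejected as expected: $*"
        else
            echo "NOT rejected: $*" >&2
        fi
    }

    check_rejected nqn.2016-06.io.spdk:cnode1 -i 0         # min_cntlid below the valid range
    check_rejected nqn.2016-06.io.spdk:cnode2 -i 65520     # min_cntlid above the valid range
    check_rejected nqn.2016-06.io.spdk:cnode3 -I 0         # max_cntlid below the valid range
    check_rejected nqn.2016-06.io.spdk:cnode4 -I 65520     # max_cntlid above the valid range
    check_rejected nqn.2016-06.io.spdk:cnode5 -i 6 -I 5    # min_cntlid greater than max_cntlid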
00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.187 12:11:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.724 12:11:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.724 00:09:40.724 real 0m13.600s 00:09:40.724 user 0m20.355s 00:09:40.724 sys 0m6.579s 00:09:40.724 12:11:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:40.724 12:11:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:40.724 ************************************ 00:09:40.724 END TEST nvmf_invalid 00:09:40.724 ************************************ 00:09:40.724 12:11:08 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:40.724 12:11:08 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:40.724 12:11:08 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:40.724 12:11:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.724 ************************************ 00:09:40.724 START TEST nvmf_abort 00:09:40.724 ************************************ 00:09:40.724 12:11:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:40.724 * Looking for test storage... 00:09:40.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.724 12:11:08 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.724 12:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:40.724 12:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.725 12:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.725 12:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.725 12:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.725 12:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.725 12:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.725 12:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.725 12:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.725 12:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.725 12:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.725 12:11:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.725 12:11:09 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.725 
12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.725 12:11:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:47.335 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:47.335 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:09:47.335 Found net devices under 0000:af:00.0: cvl_0_0 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:47.335 Found net devices under 0000:af:00.1: cvl_0_1 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.335 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:47.336 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:47.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:09:47.595 00:09:47.595 --- 10.0.0.2 ping statistics --- 00:09:47.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.595 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:09:47.595 00:09:47.595 --- 10.0.0.1 ping statistics --- 00:09:47.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.595 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.595 12:11:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@721 -- # xtrace_disable 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2011203 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2011203 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # '[' -z 2011203 ']' 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local max_retries=100 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # xtrace_disable 00:09:47.595 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.595 [2024-05-15 12:11:16.077214] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
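The network bring-up that nvmf_tcp_init just performed is easier to follow when collected in one place. A condensed sketch, assuming the same ice ports (cvl_0_0, cvl_0_1) and namespace name that appear in the trace and omitting the address flushes; this is a reading aid rather than the verbatim nvmf/common.sh code:

    # Target side lives inside a private namespace at 10.0.0.2; the initiator stays in the
    # default namespace on cvl_0_1 at 10.0.0.1.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface, then check reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application that starts next is launched under ip netns exec cvl_0_0_ns_spdk, so test traffic flows between the two enumerated ports (presumably cabled back to back on this phy rig) rather than over loopback.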
00:09:47.595 [2024-05-15 12:11:16.077259] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.595 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.854 [2024-05-15 12:11:16.150851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.854 [2024-05-15 12:11:16.218469] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.854 [2024-05-15 12:11:16.218512] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.854 [2024-05-15 12:11:16.218522] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.854 [2024-05-15 12:11:16.218530] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.854 [2024-05-15 12:11:16.218553] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.854 [2024-05-15 12:11:16.218657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.855 [2024-05-15 12:11:16.218686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.855 [2024-05-15 12:11:16.218688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@861 -- # return 0 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:48.423 [2024-05-15 12:11:16.934962] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:48.423 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:48.683 Malloc0 00:09:48.683 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:48.683 12:11:16 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:48.683 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:48.683 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:48.683 Delay0 00:09:48.683 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:48.683 12:11:16 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:48.683 12:11:16 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:48.683 12:11:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:48.683 [2024-05-15 12:11:17.015684] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:48.683 [2024-05-15 12:11:17.015937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:48.683 12:11:17 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:48.683 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.683 [2024-05-15 12:11:17.092900] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:51.220 Initializing NVMe Controllers 00:09:51.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:51.220 controller IO queue size 128 less than required 00:09:51.220 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:51.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:51.220 Initialization complete. Launching workers. 
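Before the abort statistics print, it helps to collect the provisioning calls that the xtrace scattered above. A sketch of the same sequence expressed as explicit scripts/rpc.py invocations (the trace issues these RPCs through its rpc_cmd helper):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    # 64 MB malloc bdev with 4096-byte blocks, wrapped in a delay bdev so I/O stays in flight.
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Drive it with the abort example at queue depth 128; the delayed completions give the
    # abort commands live I/O to target.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128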
00:09:51.220 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41773 00:09:51.220 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41834, failed to submit 62 00:09:51.220 success 41777, unsuccess 57, failed 0 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:51.220 rmmod nvme_tcp 00:09:51.220 rmmod nvme_fabrics 00:09:51.220 rmmod nvme_keyring 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2011203 ']' 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2011203 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' -z 2011203 ']' 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # kill -0 2011203 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # uname 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2011203 00:09:51.220 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:09:51.221 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:09:51.221 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2011203' 00:09:51.221 killing process with pid 2011203 00:09:51.221 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # kill 2011203 00:09:51.221 [2024-05-15 12:11:19.331078] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:51.221 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@971 -- # wait 2011203 00:09:51.221 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:51.221 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:51.221 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:51.221 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:51.221 
12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:51.221 12:11:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.221 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.221 12:11:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.130 12:11:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:53.130 00:09:53.130 real 0m12.758s 00:09:53.130 user 0m13.298s 00:09:53.130 sys 0m6.575s 00:09:53.130 12:11:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:53.130 12:11:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.130 ************************************ 00:09:53.130 END TEST nvmf_abort 00:09:53.130 ************************************ 00:09:53.390 12:11:21 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:53.390 12:11:21 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:09:53.390 12:11:21 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:53.390 12:11:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:53.390 ************************************ 00:09:53.390 START TEST nvmf_ns_hotplug_stress 00:09:53.390 ************************************ 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:53.390 * Looking for test storage... 00:09:53.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.390 
12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.390 
12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.390 12:11:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:59.964 12:11:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:59.964 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:59.964 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.964 
12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:59.964 Found net devices under 0000:af:00.0: cvl_0_0 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:59.964 Found net devices under 0000:af:00.1: cvl_0_1 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:59.964 
12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.964 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.965 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:59.965 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.965 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.965 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:59.965 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:59.965 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.965 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:10:00.224 00:10:00.224 --- 10.0.0.2 ping statistics --- 00:10:00.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.224 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:10:00.224 00:10:00.224 --- 10.0.0.1 ping statistics --- 00:10:00.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.224 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.224 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2015643 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2015643 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # '[' -z 2015643 ']' 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:00.483 12:11:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.483 [2024-05-15 12:11:28.809640] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:10:00.484 [2024-05-15 12:11:28.809685] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.484 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.484 [2024-05-15 12:11:28.882209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.484 [2024-05-15 12:11:28.956520] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:00.484 [2024-05-15 12:11:28.956553] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.484 [2024-05-15 12:11:28.956563] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.484 [2024-05-15 12:11:28.956572] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.484 [2024-05-15 12:11:28.956579] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.484 [2024-05-15 12:11:28.956679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.484 [2024-05-15 12:11:28.956708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.484 [2024-05-15 12:11:28.956710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.422 12:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:01.422 12:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # return 0 00:10:01.422 12:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.422 12:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:01.422 12:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:01.422 12:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.422 12:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:01.422 12:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:01.422 [2024-05-15 12:11:29.820552] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.422 12:11:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:01.681 12:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.681 [2024-05-15 12:11:30.198077] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:01.681 [2024-05-15 12:11:30.198352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.941 12:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.941 12:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:02.200 Malloc0 00:10:02.200 12:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:02.459 Delay0 00:10:02.459 12:11:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.459 12:11:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:02.719 NULL1 00:10:02.719 12:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:02.979 12:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:02.979 12:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2015988 00:10:02.979 12:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:02.979 12:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.979 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.239 12:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.239 12:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:03.239 12:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:03.498 true 00:10:03.498 12:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:03.498 12:11:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.758 12:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.758 12:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:03.758 12:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:04.017 true 00:10:04.017 12:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:04.017 12:11:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.398 Read completed with error (sct=0, sc=11) 00:10:05.398 12:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.398 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:10:05.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.398 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.398 12:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:05.398 12:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:05.658 true 00:10:05.658 12:11:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:05.658 12:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.595 12:11:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.595 12:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:06.595 12:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:06.855 true 00:10:06.855 12:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:06.855 12:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.114 12:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.114 12:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:07.114 12:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:07.373 true 00:10:07.373 12:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:07.373 12:11:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.606 12:11:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:08.606 12:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:08.606 12:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 
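The pattern that keeps repeating here, a kill -0 check on the perf PID, nvmf_subsystem_remove_ns, nvmf_subsystem_add_ns Delay0, then a null_size bump and bdev_null_resize, is the hotplug stress loop itself. A sketch of what target/ns_hotplug_stress.sh appears to do per iteration, reconstructed from the trace rather than copied from the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    # spdk_nvme_perf (PID 2015988 in this run) does 30 s of queue-depth-128 random reads
    # in the background; keep hot-plugging until it exits.
    while kill -0 "$PERF_PID"; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # detach the namespace under I/O
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # re-attach Delay0 as a fresh namespace
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"                       # also exercise bdev resize on NULL1
    done

The interleaved 'Message suppressed 999 times: Read completed with error (sct=0, sc=11)' lines are spdk_nvme_perf reporting reads that presumably hit the window where namespace 1 is detached, which is exactly the condition this test exercises.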
00:10:08.865 true 00:10:08.865 12:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:08.865 12:11:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.803 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.803 12:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.803 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.803 12:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:09.803 12:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:10.062 true 00:10:10.062 12:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:10.062 12:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.320 12:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.320 12:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:10.320 12:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:10.579 true 00:10:10.579 12:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:10.579 12:11:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.838 12:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.838 12:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:10.838 12:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:11.098 true 00:10:11.098 12:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:11.098 12:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.358 12:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.358 12:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:11.358 12:11:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1010 00:10:11.617 true 00:10:11.617 12:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:11.617 12:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.876 12:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.876 12:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:11.876 12:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:12.135 true 00:10:12.135 12:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:12.135 12:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.394 12:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.394 12:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:12.394 12:11:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:12.653 true 00:10:12.653 12:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:12.653 12:11:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.033 12:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.033 12:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:14.033 12:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:14.293 true 00:10:14.293 12:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:14.293 12:11:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.230 12:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.230 12:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:15.230 12:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:15.489 true 00:10:15.489 12:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:15.489 12:11:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.749 12:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.749 12:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:15.749 12:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:16.008 true 00:10:16.008 12:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:16.008 12:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.268 12:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.268 12:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:16.268 12:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:16.528 true 00:10:16.528 12:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:16.528 12:11:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.787 12:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.787 12:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:16.787 12:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:17.046 true 00:10:17.046 12:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:17.046 12:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.305 12:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.565 12:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 
00:10:17.565 12:11:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:17.565 true 00:10:17.565 12:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:17.565 12:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.824 12:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.083 12:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:18.083 12:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:18.083 true 00:10:18.083 12:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:18.083 12:11:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.463 12:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.463 12:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:19.463 12:11:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:19.722 true 00:10:19.722 12:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:19.722 12:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.661 12:11:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.661 12:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:20.661 12:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:20.920 true 00:10:20.920 12:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:20.920 12:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:21.180 12:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.180 12:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:21.180 12:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:21.439 true 00:10:21.439 12:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:21.439 12:11:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.698 12:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.698 12:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:21.698 12:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:21.957 true 00:10:21.957 12:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:21.958 12:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.217 12:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.476 12:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:22.476 12:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:22.476 true 00:10:22.476 12:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:22.476 12:11:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.904 12:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.904 12:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:23.904 12:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:23.904 true 00:10:23.904 12:11:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:23.904 12:11:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.842 12:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.101 12:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:25.101 12:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:25.101 true 00:10:25.101 12:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:25.101 12:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.360 12:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.619 12:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:25.619 12:11:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:25.619 true 00:10:25.619 12:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:25.619 12:11:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.999 12:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.258 12:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:27.258 12:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:27.258 true 00:10:27.258 12:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:27.258 12:11:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.196 12:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:10:28.456 12:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:28.456 12:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:28.456 true 00:10:28.456 12:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:28.456 12:11:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.714 12:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.973 12:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:28.973 12:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:28.973 true 00:10:28.973 12:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:28.973 12:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.232 12:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.492 12:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:29.492 12:11:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:29.492 true 00:10:29.492 12:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:29.492 12:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.751 12:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.011 12:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:30.011 12:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:30.011 true 00:10:30.270 12:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:30.270 12:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.208 12:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.467 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:10:31.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.467 12:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:31.467 12:11:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:31.727 true 00:10:31.727 12:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:31.727 12:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.666 12:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.666 12:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:32.666 12:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:32.925 true 00:10:32.925 12:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:32.925 12:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.185 12:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.185 Initializing NVMe Controllers 00:10:33.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:33.185 Controller IO queue size 128, less than required. 00:10:33.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:33.185 Controller IO queue size 128, less than required. 00:10:33.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:33.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:33.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:33.185 Initialization complete. Launching workers. 
00:10:33.185 ========================================================
00:10:33.185 Latency(us)
00:10:33.185 Device Information : IOPS MiB/s Average min max
00:10:33.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1466.42 0.72 46444.90 1912.75 1100417.45
00:10:33.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14045.11 6.86 9093.25 2192.30 289698.02
00:10:33.185 ========================================================
00:10:33.185 Total : 15511.52 7.57 12624.37 1912.75 1100417.45
00:10:33.185
00:10:33.185 12:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:33.185 12:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:33.444 true 00:10:33.444 12:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2015988 00:10:33.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2015988) - No such process 00:10:33.444 12:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2015988 00:10:33.444 12:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.703 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:33.703 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:33.703 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:33.703 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:33.703 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:33.703 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:33.963 null0 00:10:33.963 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:33.963 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:33.963 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:34.221 null1 00:10:34.221 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:34.222 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.222 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:34.480 null2 00:10:34.480 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:34.480 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.480 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:34.480
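The xtrace above is the single-namespace hot-plug loop of ns_hotplug_stress.sh: while the background I/O job (PID 2015988 in this run) is still alive, namespace 1 is detached from nqn.2016-06.io.spdk:cnode1, the Delay0 bdev is re-attached, and the NULL1 null bdev is grown by one unit and resized; once kill -0 fails ("No such process" above) the job is reaped and the namespaces are removed. A minimal sketch of that loop, reconstructed from the trace markers @44-@53 and not copied from the script itself (the rpc, perf_pid, and null_size names and the starting size are assumptions):

    # Sketch reconstructed from the xtrace (script lines @44-@53); not the authoritative script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=2015988        # PID of the background I/O job observed in this run
    null_size=1000          # assumed starting size; the log shows it growing by 1 per pass
    while kill -0 "$perf_pid"; do                                       # @44: loop while the I/O job runs
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: hot-add it back
        null_size=$((null_size + 1))                                    # @49: bump the target size
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # @50: resize the NULL1 bdev
    done
    wait "$perf_pid"        # @53: reap the backgrounded I/O job once kill -0 reports it gone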
null3 00:10:34.480 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:34.480 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.480 12:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:34.748 null4 00:10:34.748 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:34.748 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.748 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:35.008 null5 00:10:35.008 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:35.008 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:35.008 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:35.008 null6 00:10:35.008 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:35.008 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:35.008 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:35.268 null7 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2021798 2021801 2021802 2021805 2021807 2021809 2021811 2021812 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.268 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:35.528 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:35.528 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:35.528 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:35.528 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.528 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:35.528 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.528 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:35.528 12:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.787 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.047 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:36.305 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:36.305 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.305 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:36.305 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:36.305 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:36.305 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.305 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.305 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:36.564 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.564 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.564 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.564 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.564 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.564 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.564 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.564 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.564 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.564 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.564 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.564 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.565 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.565 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.565 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.565 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.565 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.565 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.565 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.565 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.565 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:36.565 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.565 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.565 12:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:36.565 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.565 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:36.565 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:36.565 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.565 
12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:36.565 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.565 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:36.565 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.824 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.082 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.083 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.083 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.083 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.083 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.083 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.083 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.083 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.083 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.083 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.341 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.341 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.341 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.341 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.341 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.341 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.341 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.341 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.341 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.341 12:12:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.342 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.601 12:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.909 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.180 12:12:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.180 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.439 12:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.698 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.957 12:12:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:38.957 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:38.957 rmmod nvme_tcp 00:10:39.216 rmmod nvme_fabrics 00:10:39.216 rmmod nvme_keyring 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2015643 ']' 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2015643 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' -z 2015643 ']' 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # kill -0 2015643 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # uname 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2015643 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2015643' 00:10:39.216 killing 
process with pid 2015643 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # kill 2015643 00:10:39.216 [2024-05-15 12:12:07.584774] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:39.216 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # wait 2015643 00:10:39.476 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:39.476 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:39.476 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:39.476 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.476 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:39.476 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.476 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.476 12:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.384 12:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:41.384 00:10:41.384 real 0m48.147s 00:10:41.384 user 3m8.337s 00:10:41.384 sys 0m21.078s 00:10:41.384 12:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:41.384 12:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.384 ************************************ 00:10:41.384 END TEST nvmf_ns_hotplug_stress 00:10:41.384 ************************************ 00:10:41.643 12:12:09 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:41.643 12:12:09 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:10:41.643 12:12:09 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:41.643 12:12:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:41.643 ************************************ 00:10:41.643 START TEST nvmf_connect_stress 00:10:41.643 ************************************ 00:10:41.643 12:12:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:41.643 * Looking for test storage... 
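The wall of xtrace above is the hot-plug loop of target/ns_hotplug_stress.sh: eight workers, one per namespace ID, each attaching and detaching its own null bdev on nqn.2016-06.io.spdk:cnode1 ten times, which is why every nvmf_subsystem_add_ns in the log is preceded by its worker's (( ++i )) / (( i < 10 )) check (script line 16) and why the last iterations end in a run of bare counter checks with nothing left to add. A minimal bash sketch of that structure, reconstructed from the trace rather than copied from the script (the worker function name and the parallel launch are inferred from the interleaving, so treat them as assumptions):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    add_remove() {                        # hypothetical worker, one per namespace ID
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do                                   # trace: ns_hotplug_stress.sh@16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"    # trace: ns_hotplug_stress.sh@17
            "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"            # trace: ns_hotplug_stress.sh@18
        done
    }

    for nsid in $(seq 1 8); do
        add_remove "$nsid" "null$((nsid - 1))" &   # null0..null7 are the bdev names seen in the trace
    done
    wait

Each call is an ordinary JSON-RPC against the running nvmf_tgt, so the stress here is pure namespace churn; once every worker's counter reaches ten, the script clears its trap (ns_hotplug_stress.sh@68) and nvmftestfini (@70) tears the target down, which is where the rmmod nvme_tcp / killprocess 2015643 output above comes from.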
00:10:41.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:41.643 12:12:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:49.775 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:49.775 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:49.775 Found net devices under 0000:af:00.0: cvl_0_0 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.775 12:12:16 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:49.775 Found net devices under 0000:af:00.1: cvl_0_1 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.775 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:49.776 12:12:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:49.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:49.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:10:49.776 00:10:49.776 --- 10.0.0.2 ping statistics --- 00:10:49.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.776 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:49.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:10:49.776 00:10:49.776 --- 10.0.0.1 ping statistics --- 00:10:49.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.776 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2026879 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2026879 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # '[' -z 2026879 ']' 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.776 [2024-05-15 12:12:17.163288] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
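Everything from nvmftestinit down to the modprobe above is nvmf/common.sh building its two-port TCP fixture: the target-side e810 port (cvl_0_0) is moved into a private network namespace, the initiator port (cvl_0_1) stays in the root namespace, the two sides get 10.0.0.2 and 10.0.0.1, and the pings are a reachability check before the target is launched inside that namespace with core mask 0xE. Condensed into plain commands, copied from the trace rather than from the common.sh source (root privileges and the same cvl_0_* interface names are assumed):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # accept TCP/4420 on the initiator-side port
    ping -c 1 10.0.0.2                                               # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

With that in place, the connect_stress test only has to create the TCP transport, the cnode1 subsystem, a listener on 10.0.0.2:4420 and a NULL1 null bdev, which is what the rpc_cmd calls below do before the connect_stress binary is started against the subsystem.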
00:10:49.776 [2024-05-15 12:12:17.163332] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.776 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.776 [2024-05-15 12:12:17.237386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.776 [2024-05-15 12:12:17.309753] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.776 [2024-05-15 12:12:17.309786] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.776 [2024-05-15 12:12:17.309795] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.776 [2024-05-15 12:12:17.309804] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.776 [2024-05-15 12:12:17.309827] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.776 [2024-05-15 12:12:17.309922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.776 [2024-05-15 12:12:17.309950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.776 [2024-05-15 12:12:17.309952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@861 -- # return 0 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:49.776 12:12:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.776 [2024-05-15 12:12:18.029439] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.776 [2024-05-15 12:12:18.049861] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:49.776 [2024-05-15 12:12:18.070319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.776 NULL1 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2027158 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.776 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.776 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.777 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.036 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.036 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:50.036 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.036 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.036 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.604 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.604 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:50.604 12:12:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.604 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.604 12:12:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.864 12:12:19 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.864 12:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:50.864 12:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.864 12:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.864 12:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.123 12:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.123 12:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:51.123 12:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.123 12:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.123 12:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.382 12:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.382 12:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:51.382 12:12:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.382 12:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.382 12:12:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.641 12:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.641 12:12:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:51.641 12:12:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.641 12:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.641 12:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.208 12:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.208 12:12:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:52.208 12:12:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.208 12:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.208 12:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.467 12:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.467 12:12:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:52.467 12:12:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.467 12:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.467 12:12:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.725 12:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.725 12:12:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:52.725 12:12:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.725 12:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.725 12:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.984 12:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:10:52.984 12:12:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:52.984 12:12:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.984 12:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.984 12:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.243 12:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:53.243 12:12:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:53.243 12:12:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.243 12:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:53.243 12:12:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.810 12:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:53.810 12:12:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:53.810 12:12:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.810 12:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:53.810 12:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.068 12:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.068 12:12:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:54.068 12:12:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.068 12:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.068 12:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.327 12:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.327 12:12:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:54.327 12:12:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.327 12:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.327 12:12:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.585 12:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.585 12:12:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:54.585 12:12:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.585 12:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.585 12:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.154 12:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:55.154 12:12:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:55.154 12:12:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.154 12:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:55.154 12:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.412 12:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:55.412 12:12:23 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:55.412 12:12:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.412 12:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:55.412 12:12:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.671 12:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:55.671 12:12:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:55.671 12:12:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.671 12:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:55.671 12:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.929 12:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:55.929 12:12:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:55.929 12:12:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.929 12:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:55.929 12:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.188 12:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:56.188 12:12:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:56.188 12:12:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.188 12:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:56.188 12:12:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.755 12:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:56.755 12:12:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:56.755 12:12:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.755 12:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:56.755 12:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.046 12:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.046 12:12:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:57.046 12:12:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.046 12:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.046 12:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.318 12:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.318 12:12:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:57.318 12:12:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.318 12:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.318 12:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.577 12:12:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.577 12:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 2027158 00:10:57.577 12:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.577 12:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.577 12:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.835 12:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:57.835 12:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:57.835 12:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.835 12:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:57.835 12:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.402 12:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.402 12:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:58.402 12:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.402 12:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.402 12:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.661 12:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.661 12:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:58.661 12:12:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.661 12:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.661 12:12:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.919 12:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:58.919 12:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:58.919 12:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.919 12:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.919 12:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.178 12:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:59.178 12:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:59.178 12:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.178 12:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:59.178 12:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.437 12:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:59.437 12:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:59.437 12:12:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.437 12:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:59.437 12:12:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.695 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:59.954 12:12:28 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2027158 00:10:59.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2027158) - No such process 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2027158 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.954 rmmod nvme_tcp 00:10:59.954 rmmod nvme_fabrics 00:10:59.954 rmmod nvme_keyring 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2026879 ']' 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2026879 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' -z 2026879 ']' 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # kill -0 2026879 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # uname 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2026879 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2026879' 00:10:59.954 killing process with pid 2026879 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # kill 2026879 00:10:59.954 [2024-05-15 12:12:28.396335] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:59.954 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@971 -- # wait 2026879 00:11:00.214 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:00.214 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:00.214 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:00.214 12:12:28 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:00.214 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:00.214 12:12:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.214 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.214 12:12:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.749 12:12:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:02.749 00:11:02.749 real 0m20.711s 00:11:02.749 user 0m40.713s 00:11:02.749 sys 0m10.226s 00:11:02.749 12:12:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:02.749 12:12:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.749 ************************************ 00:11:02.749 END TEST nvmf_connect_stress 00:11:02.749 ************************************ 00:11:02.749 12:12:30 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:02.749 12:12:30 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:02.749 12:12:30 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:02.749 12:12:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:02.749 ************************************ 00:11:02.749 START TEST nvmf_fused_ordering 00:11:02.749 ************************************ 00:11:02.749 12:12:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:02.749 * Looking for test storage... 
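Stripped of the xtrace noise, the nvmftestfini teardown traced above reduces to a short cleanup sequence. The sketch below is illustrative only: the interface and namespace names (cvl_0_1, cvl_0_0_ns_spdk) are the ones this node happens to use, $nvmfpid stands for the target pid recorded at start-up (2026879 in this run), and remove_spdk_ns is approximated here by an explicit ip netns delete.

  # host side: drop the kernel NVMe-oF initiator modules loaded for the test
  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  # target side: stop nvmf_tgt, then undo the per-test network plumbing
  kill "$nvmfpid"                     # killprocess in the trace also waits for the pid to exit
  ip netns delete cvl_0_0_ns_spdk     # assumption: what remove_spdk_ns amounts to on this node
  ip -4 addr flush cvl_0_1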
00:11:02.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.749 12:12:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.749 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:02.749 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.749 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.749 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.749 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.749 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.749 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.750 12:12:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:09.333 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:09.334 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:09.334 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:09.334 Found net devices under 0000:af:00.0: cvl_0_0 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.334 12:12:37 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:09.334 Found net devices under 0000:af:00.1: cvl_0_1 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:09.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:09.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:11:09.334 00:11:09.334 --- 10.0.0.2 ping statistics --- 00:11:09.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.334 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:11:09.334 00:11:09.334 --- 10.0.0.1 ping statistics --- 00:11:09.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.334 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:09.334 12:12:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:09.335 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.335 12:12:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:09.335 12:12:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:09.335 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2032588 00:11:09.335 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:09.335 12:12:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2032588 00:11:09.335 12:12:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # '[' -z 2032588 ']' 00:11:09.335 12:12:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.335 12:12:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:09.335 12:12:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.335 12:12:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:09.335 12:12:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:09.594 [2024-05-15 12:12:37.905364] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
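The two ping exchanges above are the connectivity check at the end of nvmf_tcp_init: the first port (cvl_0_0) has been moved into a private network namespace to play the target role, while its sibling (cvl_0_1) stays in the root namespace as the initiator. Condensed from the trace, the plumbing is roughly the following (names and addresses are the ones this job uses; run as root):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port now lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace

With that in place, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, as in the trace above), so the only path to it from the initiator side is 10.0.0.2 port 4420.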
00:11:09.594 [2024-05-15 12:12:37.905413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.594 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.594 [2024-05-15 12:12:37.979822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.594 [2024-05-15 12:12:38.046920] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.594 [2024-05-15 12:12:38.046962] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.594 [2024-05-15 12:12:38.046971] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.594 [2024-05-15 12:12:38.046979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.594 [2024-05-15 12:12:38.047002] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.594 [2024-05-15 12:12:38.047029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.161 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:10.161 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@861 -- # return 0 00:11:10.420 12:12:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.421 [2024-05-15 12:12:38.741537] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.421 [2024-05-15 12:12:38.761531] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:10.421 [2024-05-15 12:12:38.761741] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.421 NULL1 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.421 12:12:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:10.421 [2024-05-15 12:12:38.817770] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
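Leading up to the fused_ordering run, the script provisions the freshly started target over its RPC socket: create the TCP transport, create subsystem cnode1, attach the 10.0.0.2:4420 listener, create a null bdev and expose it as namespace 1. rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the same provisioning can be reproduced by hand roughly as follows (a sketch, not the script itself):

  rpc=./scripts/rpc.py    # defaults to the /var/tmp/spdk.sock socket the target opened above
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512    # ~1 GB null bdev, 512-byte blocks ("size: 1GB" below)
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) lines that follow are the app's own progress counters as it connects to that subsystem and exercises fused command ordering against the NULL1-backed namespace.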
00:11:10.421 [2024-05-15 12:12:38.817808] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2032756 ] 00:11:10.421 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.990 Attached to nqn.2016-06.io.spdk:cnode1 00:11:10.990 Namespace ID: 1 size: 1GB 00:11:10.990 fused_ordering(0) 00:11:10.990 fused_ordering(1) 00:11:10.990 fused_ordering(2) 00:11:10.990 fused_ordering(3) 00:11:10.990 fused_ordering(4) 00:11:10.990 fused_ordering(5) 00:11:10.990 fused_ordering(6) 00:11:10.990 fused_ordering(7) 00:11:10.990 fused_ordering(8) 00:11:10.990 fused_ordering(9) 00:11:10.990 fused_ordering(10) 00:11:10.990 fused_ordering(11) 00:11:10.990 fused_ordering(12) 00:11:10.990 fused_ordering(13) 00:11:10.990 fused_ordering(14) 00:11:10.990 fused_ordering(15) 00:11:10.990 fused_ordering(16) 00:11:10.990 fused_ordering(17) 00:11:10.990 fused_ordering(18) 00:11:10.990 fused_ordering(19) 00:11:10.990 fused_ordering(20) 00:11:10.990 fused_ordering(21) 00:11:10.990 fused_ordering(22) 00:11:10.990 fused_ordering(23) 00:11:10.990 fused_ordering(24) 00:11:10.990 fused_ordering(25) 00:11:10.990 fused_ordering(26) 00:11:10.990 fused_ordering(27) 00:11:10.990 fused_ordering(28) 00:11:10.990 fused_ordering(29) 00:11:10.990 fused_ordering(30) 00:11:10.990 fused_ordering(31) 00:11:10.990 fused_ordering(32) 00:11:10.990 fused_ordering(33) 00:11:10.990 fused_ordering(34) 00:11:10.990 fused_ordering(35) 00:11:10.990 fused_ordering(36) 00:11:10.990 fused_ordering(37) 00:11:10.990 fused_ordering(38) 00:11:10.990 fused_ordering(39) 00:11:10.990 fused_ordering(40) 00:11:10.990 fused_ordering(41) 00:11:10.990 fused_ordering(42) 00:11:10.990 fused_ordering(43) 00:11:10.990 fused_ordering(44) 00:11:10.990 fused_ordering(45) 00:11:10.990 fused_ordering(46) 00:11:10.990 fused_ordering(47) 00:11:10.990 fused_ordering(48) 00:11:10.990 fused_ordering(49) 00:11:10.990 fused_ordering(50) 00:11:10.990 fused_ordering(51) 00:11:10.990 fused_ordering(52) 00:11:10.990 fused_ordering(53) 00:11:10.990 fused_ordering(54) 00:11:10.990 fused_ordering(55) 00:11:10.990 fused_ordering(56) 00:11:10.990 fused_ordering(57) 00:11:10.990 fused_ordering(58) 00:11:10.990 fused_ordering(59) 00:11:10.990 fused_ordering(60) 00:11:10.990 fused_ordering(61) 00:11:10.990 fused_ordering(62) 00:11:10.990 fused_ordering(63) 00:11:10.990 fused_ordering(64) 00:11:10.990 fused_ordering(65) 00:11:10.990 fused_ordering(66) 00:11:10.990 fused_ordering(67) 00:11:10.990 fused_ordering(68) 00:11:10.990 fused_ordering(69) 00:11:10.990 fused_ordering(70) 00:11:10.990 fused_ordering(71) 00:11:10.990 fused_ordering(72) 00:11:10.990 fused_ordering(73) 00:11:10.990 fused_ordering(74) 00:11:10.990 fused_ordering(75) 00:11:10.990 fused_ordering(76) 00:11:10.990 fused_ordering(77) 00:11:10.990 fused_ordering(78) 00:11:10.990 fused_ordering(79) 00:11:10.990 fused_ordering(80) 00:11:10.990 fused_ordering(81) 00:11:10.990 fused_ordering(82) 00:11:10.990 fused_ordering(83) 00:11:10.990 fused_ordering(84) 00:11:10.990 fused_ordering(85) 00:11:10.990 fused_ordering(86) 00:11:10.990 fused_ordering(87) 00:11:10.990 fused_ordering(88) 00:11:10.990 fused_ordering(89) 00:11:10.990 fused_ordering(90) 00:11:10.990 fused_ordering(91) 00:11:10.990 fused_ordering(92) 00:11:10.990 fused_ordering(93) 00:11:10.990 fused_ordering(94) 00:11:10.990 fused_ordering(95) 00:11:10.990 fused_ordering(96) 00:11:10.990 
fused_ordering(97) 00:11:10.990 [... fused_ordering(98) through fused_ordering(956) completed, repetitive per-iteration output condensed ...] 00:11:14.005
fused_ordering(957) 00:11:14.005 fused_ordering(958) 00:11:14.005 fused_ordering(959) 00:11:14.005 fused_ordering(960) 00:11:14.005 fused_ordering(961) 00:11:14.005 fused_ordering(962) 00:11:14.005 fused_ordering(963) 00:11:14.005 fused_ordering(964) 00:11:14.005 fused_ordering(965) 00:11:14.005 fused_ordering(966) 00:11:14.005 fused_ordering(967) 00:11:14.005 fused_ordering(968) 00:11:14.005 fused_ordering(969) 00:11:14.005 fused_ordering(970) 00:11:14.005 fused_ordering(971) 00:11:14.005 fused_ordering(972) 00:11:14.005 fused_ordering(973) 00:11:14.005 fused_ordering(974) 00:11:14.005 fused_ordering(975) 00:11:14.005 fused_ordering(976) 00:11:14.005 fused_ordering(977) 00:11:14.005 fused_ordering(978) 00:11:14.005 fused_ordering(979) 00:11:14.005 fused_ordering(980) 00:11:14.005 fused_ordering(981) 00:11:14.005 fused_ordering(982) 00:11:14.005 fused_ordering(983) 00:11:14.005 fused_ordering(984) 00:11:14.005 fused_ordering(985) 00:11:14.005 fused_ordering(986) 00:11:14.005 fused_ordering(987) 00:11:14.005 fused_ordering(988) 00:11:14.005 fused_ordering(989) 00:11:14.005 fused_ordering(990) 00:11:14.005 fused_ordering(991) 00:11:14.005 fused_ordering(992) 00:11:14.005 fused_ordering(993) 00:11:14.005 fused_ordering(994) 00:11:14.005 fused_ordering(995) 00:11:14.005 fused_ordering(996) 00:11:14.005 fused_ordering(997) 00:11:14.005 fused_ordering(998) 00:11:14.005 fused_ordering(999) 00:11:14.005 fused_ordering(1000) 00:11:14.005 fused_ordering(1001) 00:11:14.005 fused_ordering(1002) 00:11:14.005 fused_ordering(1003) 00:11:14.005 fused_ordering(1004) 00:11:14.005 fused_ordering(1005) 00:11:14.005 fused_ordering(1006) 00:11:14.005 fused_ordering(1007) 00:11:14.005 fused_ordering(1008) 00:11:14.005 fused_ordering(1009) 00:11:14.005 fused_ordering(1010) 00:11:14.005 fused_ordering(1011) 00:11:14.005 fused_ordering(1012) 00:11:14.005 fused_ordering(1013) 00:11:14.005 fused_ordering(1014) 00:11:14.005 fused_ordering(1015) 00:11:14.005 fused_ordering(1016) 00:11:14.005 fused_ordering(1017) 00:11:14.005 fused_ordering(1018) 00:11:14.005 fused_ordering(1019) 00:11:14.005 fused_ordering(1020) 00:11:14.005 fused_ordering(1021) 00:11:14.005 fused_ordering(1022) 00:11:14.005 fused_ordering(1023) 00:11:14.005 12:12:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:14.005 12:12:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:14.005 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:14.005 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:14.005 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:14.005 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:14.272 rmmod nvme_tcp 00:11:14.272 rmmod nvme_fabrics 00:11:14.272 rmmod nvme_keyring 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2032588 ']' 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2032588 
00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' -z 2032588 ']' 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # kill -0 2032588 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # uname 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2032588 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2032588' 00:11:14.272 killing process with pid 2032588 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # kill 2032588 00:11:14.272 [2024-05-15 12:12:42.649669] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:14.272 12:12:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # wait 2032588 00:11:14.532 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:14.532 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:14.532 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:14.532 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:14.532 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:14.532 12:12:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.532 12:12:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.532 12:12:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.471 12:12:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:16.471 00:11:16.471 real 0m14.150s 00:11:16.471 user 0m8.138s 00:11:16.471 sys 0m8.184s 00:11:16.471 12:12:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:16.471 12:12:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:16.471 ************************************ 00:11:16.471 END TEST nvmf_fused_ordering 00:11:16.471 ************************************ 00:11:16.471 12:12:44 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:16.471 12:12:44 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:16.471 12:12:44 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:16.471 12:12:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:16.730 ************************************ 00:11:16.730 START TEST nvmf_delete_subsystem 00:11:16.730 ************************************ 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
00:11:16.730 * Looking for test storage... 00:11:16.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:16.730 12:12:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:23.330 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:23.330 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:23.330 Found net devices under 0000:af:00.0: cvl_0_0 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:23.330 Found net devices under 0000:af:00.1: cvl_0_1 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.330 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:23.331 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:23.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:11:23.589 00:11:23.589 --- 10.0.0.2 ping statistics --- 00:11:23.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.589 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:23.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:11:23.589 00:11:23.589 --- 10.0.0.1 ping statistics --- 00:11:23.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.589 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:23.589 12:12:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2037158 00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2037158 00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # '[' -z 2037158 ']' 00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:23.589 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.589 [2024-05-15 12:12:52.077831] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:11:23.589 [2024-05-15 12:12:52.077879] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.589 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.847 [2024-05-15 12:12:52.151967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:23.847 [2024-05-15 12:12:52.225524] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.847 [2024-05-15 12:12:52.225566] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.847 [2024-05-15 12:12:52.225575] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.847 [2024-05-15 12:12:52.225584] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.847 [2024-05-15 12:12:52.225608] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.847 [2024-05-15 12:12:52.225650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.847 [2024-05-15 12:12:52.225655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # return 0 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.413 [2024-05-15 12:12:52.922663] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.413 12:12:52 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:24.413 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.413 [2024-05-15 12:12:52.942667] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:24.413 [2024-05-15 12:12:52.942909] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.671 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:24.671 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:24.671 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:24.671 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.671 NULL1 00:11:24.671 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:24.671 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:24.671 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:24.671 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.671 Delay0 00:11:24.671 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:24.671 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.671 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:24.672 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.672 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:24.672 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2037268 00:11:24.672 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:24.672 12:12:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:24.672 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.672 [2024-05-15 12:12:53.023798] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
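The subsystem setup traced above can also be reproduced by hand: rpc_cmd in SPDK's test harness forwards its arguments to scripts/rpc.py, so the equivalent standalone calls against a running nvmf_tgt would look roughly like the sketch below. This is an illustration only, assuming an SPDK checkout as the working directory and the target's default /var/tmp/spdk.sock RPC socket; the arguments are the same ones shown in the trace.

  # Assumes a running nvmf_tgt and the default /var/tmp/spdk.sock RPC socket.
  # TCP transport with an 8192-byte I/O unit, a subsystem with one TCP listener,
  # and a null bdev wrapped in a delay bdev (~1 s latencies) as its namespace.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev is what keeps I/O outstanding long enough for the nvmf_delete_subsystem call that follows to exercise subsystem deletion while requests are still in flight.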
00:11:26.572 12:12:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.572 12:12:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.572 12:12:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 [2024-05-15 12:12:55.114246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe060000c00 is same with the state(5) to be set 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 
Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 starting I/O failed: -6 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Read completed with error (sct=0, sc=8) 00:11:26.831 Write completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 starting I/O failed: -6 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 
00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 starting I/O failed: -6 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 starting I/O failed: -6 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 starting I/O failed: -6 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 starting I/O failed: -6 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 starting I/O failed: -6 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 starting I/O failed: -6 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed 
with error (sct=0, sc=8) 00:11:26.832 [2024-05-15 12:12:55.114981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead980 is same with the state(5) to be set 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:26.832 Read completed with error (sct=0, sc=8) 00:11:26.832 Write completed with error (sct=0, sc=8) 00:11:27.766 [2024-05-15 12:12:56.080606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb0420 is same with the state(5) to be set 00:11:27.766 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 [2024-05-15 12:12:56.116164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadb60 is same with the state(5) to be set 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error 
(sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 [2024-05-15 12:12:56.116434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeafe20 is same with the state(5) to be set 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 [2024-05-15 12:12:56.116554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe06000c2f0 is same with the state(5) to be set 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 
00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Write completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 Read completed with error (sct=0, sc=8) 00:11:27.767 [2024-05-15 12:12:56.116873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeafc40 is same with the state(5) to be set 00:11:27.767 Initializing NVMe Controllers 00:11:27.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:27.767 Controller IO queue size 128, less than required. 00:11:27.767 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:27.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:27.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:27.767 Initialization complete. Launching workers. 
00:11:27.767 ======================================================== 00:11:27.767 Latency(us) 00:11:27.767 Device Information : IOPS MiB/s Average min max 00:11:27.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 194.09 0.09 945725.08 1215.53 1011997.87 00:11:27.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.37 0.08 878026.34 516.79 1012216.03 00:11:27.767 ======================================================== 00:11:27.767 Total : 349.45 0.17 915626.06 516.79 1012216.03 00:11:27.767 00:11:27.767 [2024-05-15 12:12:56.117492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb0420 (9): Bad file descriptor 00:11:27.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:27.767 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:27.767 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:27.767 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2037268 00:11:27.767 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:28.332 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:28.332 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2037268 00:11:28.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2037268) - No such process 00:11:28.332 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2037268 00:11:28.332 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:11:28.332 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 2037268 00:11:28.332 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:11:28.332 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:28.332 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:11:28.332 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:28.332 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 2037268 00:11:28.332 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:11:28.332 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.333 [2024-05-15 12:12:56.643961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2037960 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2037960 00:11:28.333 12:12:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:28.333 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.333 [2024-05-15 12:12:56.715431] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
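The second half of the test recreates the subsystem, re-adds the TCP listener and the Delay0 namespace, launches a shorter (-t 3) spdk_nvme_perf run, and then polls the perf process with kill -0 in 0.5-second steps, giving up after roughly 20 iterations. A minimal sketch of that wait loop, matching the delay / kill -0 / sleep pattern visible in the trace; the stderr redirect and the failure message are illustrative additions:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        sleep 0.5
        # Roughly a 10-second budget before declaring the run stuck
        if ((delay++ > 20)); then
            echo "spdk_nvme_perf did not exit in time" >&2
            exit 1
        fi
    done

Once perf exits on its own, the next kill -0 reports "No such process", the wait on the PID returns, and the script tears the target down (nvmftestfini followed by rmmod of the nvme-tcp modules), which is what the remainder of this trace shows.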
00:11:28.898 12:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:28.898 12:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2037960 00:11:28.898 12:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:29.156 12:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:29.156 12:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2037960 00:11:29.156 12:12:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:29.723 12:12:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:29.723 12:12:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2037960 00:11:29.723 12:12:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:30.289 12:12:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:30.289 12:12:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2037960 00:11:30.289 12:12:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:30.855 12:12:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:30.855 12:12:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2037960 00:11:30.855 12:12:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:31.420 12:12:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:31.420 12:12:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2037960 00:11:31.420 12:12:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:31.420 Initializing NVMe Controllers 00:11:31.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:31.420 Controller IO queue size 128, less than required. 00:11:31.420 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:31.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:31.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:31.420 Initialization complete. Launching workers. 
00:11:31.420 ======================================================== 00:11:31.420 Latency(us) 00:11:31.420 Device Information : IOPS MiB/s Average min max 00:11:31.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003509.22 1000326.00 1041509.07 00:11:31.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004773.49 1000452.41 1012834.05 00:11:31.420 ======================================================== 00:11:31.420 Total : 256.00 0.12 1004141.36 1000326.00 1041509.07 00:11:31.420 00:11:31.678 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:31.678 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2037960 00:11:31.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2037960) - No such process 00:11:31.678 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2037960 00:11:31.678 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:31.678 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:31.678 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:31.678 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:31.678 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:31.678 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:31.678 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:31.678 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:31.936 rmmod nvme_tcp 00:11:31.936 rmmod nvme_fabrics 00:11:31.936 rmmod nvme_keyring 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2037158 ']' 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2037158 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' -z 2037158 ']' 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # kill -0 2037158 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # uname 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2037158 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2037158' 00:11:31.936 killing process with pid 2037158 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # kill 2037158 00:11:31.936 [2024-05-15 12:13:00.314551] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:31.936 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # wait 2037158 00:11:32.196 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:32.196 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:32.196 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:32.196 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:32.196 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:32.196 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.196 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.196 12:13:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.099 12:13:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:34.099 00:11:34.099 real 0m17.571s 00:11:34.099 user 0m29.607s 00:11:34.099 sys 0m7.071s 00:11:34.099 12:13:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:34.099 12:13:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.099 ************************************ 00:11:34.099 END TEST nvmf_delete_subsystem 00:11:34.099 ************************************ 00:11:34.357 12:13:02 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:34.357 12:13:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:34.357 12:13:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:34.357 12:13:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:34.357 ************************************ 00:11:34.357 START TEST nvmf_ns_masking 00:11:34.357 ************************************ 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:34.357 * Looking for test storage... 
00:11:34.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.357 12:13:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=6fb5be9e-8a1d-4778-915f-d41a92b45622 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:34.358 12:13:02 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:34.358 12:13:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:40.981 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:40.981 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:40.981 Found net devices under 0000:af:00.0: cvl_0_0 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:40.981 Found net devices under 0000:af:00.1: cvl_0_1 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:40.981 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:40.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:11:40.982 00:11:40.982 --- 10.0.0.2 ping statistics --- 00:11:40.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.982 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:11:40.982 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:11:41.241 00:11:41.241 --- 10.0.0.1 ping statistics --- 00:11:41.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.241 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2042301 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2042301 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # '[' -z 2042301 ']' 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:41.241 12:13:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:41.241 [2024-05-15 12:13:09.612390] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:11:41.241 [2024-05-15 12:13:09.612437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.241 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.241 [2024-05-15 12:13:09.685779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.241 [2024-05-15 12:13:09.759752] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.241 [2024-05-15 12:13:09.759793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.241 [2024-05-15 12:13:09.759803] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.241 [2024-05-15 12:13:09.759811] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.241 [2024-05-15 12:13:09.759834] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.241 [2024-05-15 12:13:09.759895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.241 [2024-05-15 12:13:09.759996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.241 [2024-05-15 12:13:09.760024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.241 [2024-05-15 12:13:09.760025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.177 12:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:42.177 12:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@861 -- # return 0 00:11:42.177 12:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:42.177 12:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:42.177 12:13:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:42.177 12:13:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.177 12:13:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:42.177 [2024-05-15 12:13:10.614655] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.177 12:13:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:42.177 12:13:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:42.177 12:13:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:42.436 Malloc1 00:11:42.436 12:13:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:42.694 Malloc2 00:11:42.694 12:13:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:42.694 12:13:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:42.953 12:13:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.211 [2024-05-15 12:13:11.537799] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:43.211 [2024-05-15 12:13:11.538085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.211 12:13:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:11:43.211 12:13:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6fb5be9e-8a1d-4778-915f-d41a92b45622 -a 10.0.0.2 -s 4420 -i 4 00:11:43.211 12:13:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:43.211 12:13:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:11:43.211 12:13:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.211 12:13:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:43.211 12:13:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:45.748 [ 0]:0x1 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8ad0783c88164863b3379a5b50c00b96 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8ad0783c88164863b3379a5b50c00b96 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:45.748 12:13:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:45.748 [ 0]:0x1 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8ad0783c88164863b3379a5b50c00b96 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8ad0783c88164863b3379a5b50c00b96 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:45.748 [ 1]:0x2 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a1804ff3f5b4ad9aceba237ead8a7e2 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a1804ff3f5b4ad9aceba237ead8a7e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.748 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.007 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:46.267 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:11:46.267 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6fb5be9e-8a1d-4778-915f-d41a92b45622 -a 10.0.0.2 -s 4420 -i 4 00:11:46.267 12:13:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:46.267 12:13:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:11:46.267 12:13:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.267 12:13:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 1 ]] 00:11:46.267 12:13:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=1 00:11:46.267 12:13:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:11:48.172 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:48.172 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:48.172 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # 
grep -c SPDKISFASTANDAWESOME 00:11:48.172 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:48.172 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.172 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:11:48.172 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:48.172 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:48.431 [ 0]:0x2 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a1804ff3f5b4ad9aceba237ead8a7e2 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a1804ff3f5b4ad9aceba237ead8a7e2 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.431 12:13:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:48.690 [ 0]:0x1 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8ad0783c88164863b3379a5b50c00b96 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8ad0783c88164863b3379a5b50c00b96 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:48.690 [ 1]:0x2 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a1804ff3f5b4ad9aceba237ead8a7e2 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a1804ff3f5b4ad9aceba237ead8a7e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.690 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:48.949 12:13:17 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:48.949 [ 0]:0x2 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:48.949 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:49.208 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a1804ff3f5b4ad9aceba237ead8a7e2 00:11:49.208 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a1804ff3f5b4ad9aceba237ead8a7e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.208 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:11:49.208 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.208 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:49.208 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:11:49.208 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6fb5be9e-8a1d-4778-915f-d41a92b45622 -a 10.0.0.2 -s 4420 -i 4 00:11:49.467 12:13:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:49.467 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:11:49.467 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.467 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:11:49.467 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:11:49.467 12:13:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:11:51.372 12:13:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:51.372 12:13:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:51.372 12:13:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.372 12:13:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:11:51.372 12:13:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.372 12:13:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:11:51.372 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:11:51.372 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:51.632 [ 0]:0x1 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8ad0783c88164863b3379a5b50c00b96 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8ad0783c88164863b3379a5b50c00b96 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:51.632 [ 1]:0x2 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.632 12:13:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.632 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a1804ff3f5b4ad9aceba237ead8a7e2 00:11:51.632 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a1804ff3f5b4ad9aceba237ead8a7e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.632 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:51.891 [ 0]:0x2 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a1804ff3f5b4ad9aceba237ead8a7e2 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a1804ff3f5b4ad9aceba237ead8a7e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:51.891 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:52.151 [2024-05-15 12:13:20.519066] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:52.151 
request: 00:11:52.151 { 00:11:52.151 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.151 "nsid": 2, 00:11:52.151 "host": "nqn.2016-06.io.spdk:host1", 00:11:52.151 "method": "nvmf_ns_remove_host", 00:11:52.151 "req_id": 1 00:11:52.151 } 00:11:52.151 Got JSON-RPC error response 00:11:52.151 response: 00:11:52.151 { 00:11:52.151 "code": -32602, 00:11:52.151 "message": "Invalid parameters" 00:11:52.151 } 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:52.151 [ 0]:0x2 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:52.151 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:52.410 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5a1804ff3f5b4ad9aceba237ead8a7e2 00:11:52.410 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5a1804ff3f5b4ad9aceba237ead8a7e2 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.410 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:11:52.410 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.410 12:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:52.670 rmmod nvme_tcp 00:11:52.670 rmmod nvme_fabrics 00:11:52.670 rmmod nvme_keyring 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2042301 ']' 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2042301 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' -z 2042301 ']' 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # kill -0 2042301 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # uname 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2042301 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2042301' 00:11:52.670 killing process with pid 2042301 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # kill 2042301 00:11:52.670 [2024-05-15 12:13:21.150293] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:52.670 12:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@971 -- # wait 2042301 00:11:52.930 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:52.930 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:52.930 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:52.930 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:11:52.930 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:52.930 12:13:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.930 12:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.930 12:13:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.470 12:13:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:55.470 00:11:55.470 real 0m20.772s 00:11:55.470 user 0m50.138s 00:11:55.470 sys 0m7.498s 00:11:55.470 12:13:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:55.470 12:13:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:55.470 ************************************ 00:11:55.470 END TEST nvmf_ns_masking 00:11:55.470 ************************************ 00:11:55.470 12:13:23 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:55.470 12:13:23 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:55.470 12:13:23 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:55.470 12:13:23 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:55.470 12:13:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:55.470 ************************************ 00:11:55.470 START TEST nvmf_nvme_cli 00:11:55.471 ************************************ 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:55.471 * Looking for test storage... 
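For reference, the nvmf_ns_masking run that just finished above reduces to a short RPC/CLI sequence. The sketch below is a condensed, hand-written summary of what the test script drove, not the script itself: the rpc.py path is shortened, and it assumes an SPDK nvmf target is already running and reachable on 10.0.0.2:4420 as in this run.

  # target side: build the subsystem and mask a namespace
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # NSID 1 becomes visible to host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # and is hidden again
  # initiator side: check visibility the same way ns_is_visible() does
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420
  nvme list-ns /dev/nvme0                               # only namespaces visible to this host NQN are listed
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all-zero NGUID in this run means the namespace is masked

Note that nvmf_ns_remove_host against NSID 2, which was added above without --no-auto-visible, is rejected with the -32602 "Invalid parameters" JSON-RPC error seen in the log; that is the negative case the script's NOT wrapper expects.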
00:11:55.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:55.471 12:13:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.077 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.077 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:02.077 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:02.077 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:02.077 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:02.077 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:02.077 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:02.078 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:02.078 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:02.078 Found net devices under 0000:af:00.0: cvl_0_0 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:02.078 Found net devices under 0000:af:00.1: cvl_0_1 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:02.078 12:13:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:02.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:12:02.078 00:12:02.078 --- 10.0.0.2 ping statistics --- 00:12:02.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.078 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:02.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:02.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:12:02.078 00:12:02.078 --- 10.0.0.1 ping statistics --- 00:12:02.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.078 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2048024 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2048024 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@828 -- # '[' -z 2048024 ']' 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:02.078 12:13:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.079 12:13:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:02.079 12:13:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.079 [2024-05-15 12:13:30.379298] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:12:02.079 [2024-05-15 12:13:30.379343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.079 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.079 [2024-05-15 12:13:30.453650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.079 [2024-05-15 12:13:30.528468] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.079 [2024-05-15 12:13:30.528507] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
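The target bring-up logged above (nvmftestinit followed by nvmfappstart) amounts to roughly the following hand-written sketch. The interface names cvl_0_0/cvl_0_1 are the e810 ports detected earlier in this run; the nvmf_tgt path is shortened, and backgrounding with '&' stands in for the harness's waitforlisten helper.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open TCP/4420 in the host firewall, as the harness does
  ping -c 1 10.0.0.2                                               # reachability check, as in the log
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target runs inside the namespace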
00:12:02.079 [2024-05-15 12:13:30.528520] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.079 [2024-05-15 12:13:30.528529] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.079 [2024-05-15 12:13:30.528537] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.079 [2024-05-15 12:13:30.528581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.079 [2024-05-15 12:13:30.528697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.079 [2024-05-15 12:13:30.528726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.079 [2024-05-15 12:13:30.528727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@861 -- # return 0 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.018 [2024-05-15 12:13:31.264006] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.018 Malloc0 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.018 Malloc1 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.018 12:13:31 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.018 [2024-05-15 12:13:31.348223] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:03.018 [2024-05-15 12:13:31.348478] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:12:03.018 00:12:03.018 Discovery Log Number of Records 2, Generation counter 2 00:12:03.018 =====Discovery Log Entry 0====== 00:12:03.018 trtype: tcp 00:12:03.018 adrfam: ipv4 00:12:03.018 subtype: current discovery subsystem 00:12:03.018 treq: not required 00:12:03.018 portid: 0 00:12:03.018 trsvcid: 4420 00:12:03.018 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:03.018 traddr: 10.0.0.2 00:12:03.018 eflags: explicit discovery connections, duplicate discovery information 00:12:03.018 sectype: none 00:12:03.018 =====Discovery Log Entry 1====== 00:12:03.018 trtype: tcp 00:12:03.018 adrfam: ipv4 00:12:03.018 subtype: nvme subsystem 00:12:03.018 treq: not required 00:12:03.018 portid: 0 00:12:03.018 trsvcid: 4420 00:12:03.018 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:03.018 traddr: 10.0.0.2 00:12:03.018 eflags: none 00:12:03.018 sectype: none 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
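The discovery output above and the connect/verify steps that follow come down to the standard nvme-cli flow sketched here. $NVME_HOSTNQN and $NVME_HOSTID stand for the host NQN/ID values generated earlier in this run with nvme gen-hostnqn, and the expected device count of 2 corresponds to the Malloc0/Malloc1 namespaces created above.

  nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
                --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # should report 2 here (Malloc0 and Malloc1)
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1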
00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:03.018 12:13:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.398 12:13:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:04.398 12:13:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local i=0 00:12:04.398 12:13:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.398 12:13:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:12:04.398 12:13:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:12:04.398 12:13:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # sleep 2 00:12:06.302 12:13:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:12:06.302 12:13:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:06.302 12:13:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.302 12:13:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:12:06.302 12:13:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.302 12:13:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # return 0 00:12:06.302 12:13:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:06.302 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:06.302 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.302 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:06.561 /dev/nvme0n1 ]] 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:06.561 12:13:34 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.561 12:13:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:06.820 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:06.820 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.820 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:06.820 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.820 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:06.820 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:06.820 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.820 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:06.820 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:06.820 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:06.820 12:13:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:06.820 12:13:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # local i=0 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1228 -- # return 0 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:07.080 rmmod nvme_tcp 00:12:07.080 rmmod nvme_fabrics 00:12:07.080 rmmod nvme_keyring 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2048024 ']' 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2048024 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # '[' -z 2048024 ']' 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # kill -0 2048024 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # uname 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2048024 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2048024' 00:12:07.080 killing process with pid 2048024 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # kill 2048024 00:12:07.080 [2024-05-15 12:13:35.526790] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:07.080 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # wait 2048024 00:12:07.340 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.340 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.340 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.340 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.340 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.340 12:13:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.340 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.340 12:13:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.877 12:13:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:09.877 00:12:09.877 real 0m14.293s 00:12:09.877 user 0m22.595s 00:12:09.877 sys 0m5.807s 00:12:09.877 12:13:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:09.877 12:13:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:09.877 ************************************ 00:12:09.877 END TEST nvmf_nvme_cli 00:12:09.877 ************************************ 00:12:09.877 12:13:37 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:09.877 12:13:37 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:09.877 12:13:37 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:09.877 12:13:37 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:09.877 12:13:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:09.877 ************************************ 00:12:09.877 
START TEST nvmf_vfio_user 00:12:09.877 ************************************ 00:12:09.877 12:13:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:09.877 * Looking for test storage... 00:12:09.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2049489 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2049489' 00:12:09.877 Process pid: 2049489 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2049489 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # '[' -z 2049489 ']' 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:09.877 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:09.877 [2024-05-15 12:13:38.131770] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:12:09.877 [2024-05-15 12:13:38.131821] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.877 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.877 [2024-05-15 12:13:38.200866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.877 [2024-05-15 12:13:38.275706] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.877 [2024-05-15 12:13:38.275743] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.878 [2024-05-15 12:13:38.275752] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.878 [2024-05-15 12:13:38.275760] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.878 [2024-05-15 12:13:38.275767] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
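The start-up notices above also show how the freshly launched vfio-user target can be observed while the test runs; a minimal sketch, assuming the shm id 0 the target was started with:

  spdk_trace -s nvmf -i 0          # snapshot of trace events from the running nvmf target (shm id 0)
  cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the shared-memory trace file for offline analysis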
00:12:09.878 [2024-05-15 12:13:38.275809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.878 [2024-05-15 12:13:38.275854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.878 [2024-05-15 12:13:38.275826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.878 [2024-05-15 12:13:38.275852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.444 12:13:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:10.444 12:13:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@861 -- # return 0 00:12:10.444 12:13:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:11.814 12:13:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:11.814 12:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:11.814 12:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:11.814 12:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:11.814 12:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:11.814 12:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:11.814 Malloc1 00:12:12.072 12:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:12.072 12:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:12.329 12:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:12.587 [2024-05-15 12:13:40.878518] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:12.587 12:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:12.587 12:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:12.587 12:13:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:12.587 Malloc2 00:12:12.587 12:13:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:12.845 12:13:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:13.103 12:13:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
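setup_nvmf_vfio_user in target/nvmf_vfio_user.sh drives the configuration traced above. A condensed sketch for the first device, assuming the same rpc.py and socket directory used in this run; all names and paths appear verbatim in the trace:

  scripts/rpc.py nvmf_create_transport -t VFIOUSER                      # vfio-user transport instead of TCP
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1                       # directory that serves as the listener address
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                   # 64 MB backing bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The second device repeats the same steps with Malloc2, nqn.2019-07.io.spdk:cnode2 and /var/run/vfio-user/domain/vfio-user2/2, and the identify, perf and reconnect runs that follow attach to the first one with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'.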
00:12:13.363 12:13:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:13.363 12:13:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:13.363 12:13:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:13.363 12:13:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:13.363 12:13:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:13.363 12:13:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:13.363 [2024-05-15 12:13:41.673644] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:12:13.363 [2024-05-15 12:13:41.673696] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2050050 ] 00:12:13.363 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.363 [2024-05-15 12:13:41.705557] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:13.363 [2024-05-15 12:13:41.713726] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:13.363 [2024-05-15 12:13:41.713748] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd44f59e000 00:12:13.363 [2024-05-15 12:13:41.714725] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:13.363 [2024-05-15 12:13:41.715732] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:13.363 [2024-05-15 12:13:41.716735] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:13.363 [2024-05-15 12:13:41.717740] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:13.363 [2024-05-15 12:13:41.718742] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:13.363 [2024-05-15 12:13:41.719747] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:13.363 [2024-05-15 12:13:41.720752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:13.363 [2024-05-15 12:13:41.721755] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:13.363 [2024-05-15 12:13:41.722763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:13.363 [2024-05-15 12:13:41.722778] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd44f593000 00:12:13.363 [2024-05-15 12:13:41.723673] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:13.363 [2024-05-15 12:13:41.732985] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:13.363 [2024-05-15 12:13:41.733014] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:13.363 [2024-05-15 12:13:41.737852] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:13.363 [2024-05-15 12:13:41.737890] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:13.363 [2024-05-15 12:13:41.737969] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:13.363 [2024-05-15 12:13:41.737987] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:13.363 [2024-05-15 12:13:41.737994] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:13.363 [2024-05-15 12:13:41.738848] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:13.363 [2024-05-15 12:13:41.738859] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:13.363 [2024-05-15 12:13:41.738868] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:13.363 [2024-05-15 12:13:41.739850] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:13.363 [2024-05-15 12:13:41.739863] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:13.363 [2024-05-15 12:13:41.739872] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:13.363 [2024-05-15 12:13:41.740853] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:13.363 [2024-05-15 12:13:41.740863] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:13.363 [2024-05-15 12:13:41.743200] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:13.363 [2024-05-15 12:13:41.743211] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:13.363 [2024-05-15 12:13:41.743218] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:13.363 [2024-05-15 12:13:41.743226] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:13.363 
[2024-05-15 12:13:41.743333] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:13.363 [2024-05-15 12:13:41.743339] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:13.363 [2024-05-15 12:13:41.743345] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:13.363 [2024-05-15 12:13:41.743875] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:13.363 [2024-05-15 12:13:41.744876] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:13.363 [2024-05-15 12:13:41.745881] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:13.364 [2024-05-15 12:13:41.746877] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:13.364 [2024-05-15 12:13:41.746944] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:13.364 [2024-05-15 12:13:41.747890] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:13.364 [2024-05-15 12:13:41.747900] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:13.364 [2024-05-15 12:13:41.747906] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.747925] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:13.364 [2024-05-15 12:13:41.747934] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.747952] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:13.364 [2024-05-15 12:13:41.747959] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:13.364 [2024-05-15 12:13:41.747974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:13.364 [2024-05-15 12:13:41.748023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:13.364 [2024-05-15 12:13:41.748034] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:13.364 [2024-05-15 12:13:41.748040] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:13.364 [2024-05-15 12:13:41.748046] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:13.364 [2024-05-15 12:13:41.748052] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:13.364 [2024-05-15 12:13:41.748058] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:13.364 [2024-05-15 12:13:41.748064] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:13.364 [2024-05-15 12:13:41.748069] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748083] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:13.364 [2024-05-15 12:13:41.748111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:13.364 [2024-05-15 12:13:41.748124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.364 [2024-05-15 12:13:41.748134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.364 [2024-05-15 12:13:41.748143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.364 [2024-05-15 12:13:41.748151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.364 [2024-05-15 12:13:41.748157] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748166] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:13.364 [2024-05-15 12:13:41.748183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:13.364 [2024-05-15 12:13:41.748197] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:13.364 [2024-05-15 12:13:41.748207] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748215] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748222] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:13.364 [2024-05-15 
12:13:41.748244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:13.364 [2024-05-15 12:13:41.748286] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748296] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748304] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:13.364 [2024-05-15 12:13:41.748310] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:13.364 [2024-05-15 12:13:41.748317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:13.364 [2024-05-15 12:13:41.748329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:13.364 [2024-05-15 12:13:41.748341] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:13.364 [2024-05-15 12:13:41.748355] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748364] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748372] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:13.364 [2024-05-15 12:13:41.748378] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:13.364 [2024-05-15 12:13:41.748385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:13.364 [2024-05-15 12:13:41.748399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:13.364 [2024-05-15 12:13:41.748411] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748419] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748427] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:13.364 [2024-05-15 12:13:41.748433] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:13.364 [2024-05-15 12:13:41.748439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:13.364 [2024-05-15 12:13:41.748451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:13.364 [2024-05-15 12:13:41.748463] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:13.364 
[2024-05-15 12:13:41.748471] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748480] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748488] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748494] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748501] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:13.364 [2024-05-15 12:13:41.748507] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:13.364 [2024-05-15 12:13:41.748515] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:13.364 [2024-05-15 12:13:41.748537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:13.364 [2024-05-15 12:13:41.748549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:13.364 [2024-05-15 12:13:41.748562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:13.364 [2024-05-15 12:13:41.748573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:13.364 [2024-05-15 12:13:41.748586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:13.364 [2024-05-15 12:13:41.748599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:13.364 [2024-05-15 12:13:41.748612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:13.364 [2024-05-15 12:13:41.748622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:13.364 [2024-05-15 12:13:41.748635] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:13.364 [2024-05-15 12:13:41.748640] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:13.364 [2024-05-15 12:13:41.748645] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:13.364 [2024-05-15 12:13:41.748650] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:13.364 [2024-05-15 12:13:41.748656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:13.364 [2024-05-15 12:13:41.748664] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:13.364 [2024-05-15 12:13:41.748670] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:13.364 [2024-05-15 12:13:41.748677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:13.364 [2024-05-15 12:13:41.748684] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:13.364 [2024-05-15 12:13:41.748690] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:13.364 [2024-05-15 12:13:41.748697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:13.364 [2024-05-15 12:13:41.748707] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:13.364 [2024-05-15 12:13:41.748713] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:13.365 [2024-05-15 12:13:41.748720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:13.365 [2024-05-15 12:13:41.748727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:13.365 [2024-05-15 12:13:41.748743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:13.365 [2024-05-15 12:13:41.748754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:13.365 [2024-05-15 12:13:41.748765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:13.365 ===================================================== 00:12:13.365 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:13.365 ===================================================== 00:12:13.365 Controller Capabilities/Features 00:12:13.365 ================================ 00:12:13.365 Vendor ID: 4e58 00:12:13.365 Subsystem Vendor ID: 4e58 00:12:13.365 Serial Number: SPDK1 00:12:13.365 Model Number: SPDK bdev Controller 00:12:13.365 Firmware Version: 24.05 00:12:13.365 Recommended Arb Burst: 6 00:12:13.365 IEEE OUI Identifier: 8d 6b 50 00:12:13.365 Multi-path I/O 00:12:13.365 May have multiple subsystem ports: Yes 00:12:13.365 May have multiple controllers: Yes 00:12:13.365 Associated with SR-IOV VF: No 00:12:13.365 Max Data Transfer Size: 131072 00:12:13.365 Max Number of Namespaces: 32 00:12:13.365 Max Number of I/O Queues: 127 00:12:13.365 NVMe Specification Version (VS): 1.3 00:12:13.365 NVMe Specification Version (Identify): 1.3 00:12:13.365 Maximum Queue Entries: 256 00:12:13.365 Contiguous Queues Required: Yes 00:12:13.365 Arbitration Mechanisms Supported 00:12:13.365 Weighted Round Robin: Not Supported 00:12:13.365 Vendor Specific: Not Supported 00:12:13.365 Reset Timeout: 15000 ms 00:12:13.365 Doorbell Stride: 4 bytes 00:12:13.365 NVM Subsystem Reset: Not Supported 00:12:13.365 Command Sets Supported 00:12:13.365 NVM Command Set: Supported 00:12:13.365 Boot Partition: Not Supported 00:12:13.365 Memory Page Size Minimum: 4096 bytes 00:12:13.365 Memory Page Size Maximum: 4096 bytes 00:12:13.365 Persistent Memory Region: Not Supported 00:12:13.365 Optional Asynchronous 
Events Supported 00:12:13.365 Namespace Attribute Notices: Supported 00:12:13.365 Firmware Activation Notices: Not Supported 00:12:13.365 ANA Change Notices: Not Supported 00:12:13.365 PLE Aggregate Log Change Notices: Not Supported 00:12:13.365 LBA Status Info Alert Notices: Not Supported 00:12:13.365 EGE Aggregate Log Change Notices: Not Supported 00:12:13.365 Normal NVM Subsystem Shutdown event: Not Supported 00:12:13.365 Zone Descriptor Change Notices: Not Supported 00:12:13.365 Discovery Log Change Notices: Not Supported 00:12:13.365 Controller Attributes 00:12:13.365 128-bit Host Identifier: Supported 00:12:13.365 Non-Operational Permissive Mode: Not Supported 00:12:13.365 NVM Sets: Not Supported 00:12:13.365 Read Recovery Levels: Not Supported 00:12:13.365 Endurance Groups: Not Supported 00:12:13.365 Predictable Latency Mode: Not Supported 00:12:13.365 Traffic Based Keep ALive: Not Supported 00:12:13.365 Namespace Granularity: Not Supported 00:12:13.365 SQ Associations: Not Supported 00:12:13.365 UUID List: Not Supported 00:12:13.365 Multi-Domain Subsystem: Not Supported 00:12:13.365 Fixed Capacity Management: Not Supported 00:12:13.365 Variable Capacity Management: Not Supported 00:12:13.365 Delete Endurance Group: Not Supported 00:12:13.365 Delete NVM Set: Not Supported 00:12:13.365 Extended LBA Formats Supported: Not Supported 00:12:13.365 Flexible Data Placement Supported: Not Supported 00:12:13.365 00:12:13.365 Controller Memory Buffer Support 00:12:13.365 ================================ 00:12:13.365 Supported: No 00:12:13.365 00:12:13.365 Persistent Memory Region Support 00:12:13.365 ================================ 00:12:13.365 Supported: No 00:12:13.365 00:12:13.365 Admin Command Set Attributes 00:12:13.365 ============================ 00:12:13.365 Security Send/Receive: Not Supported 00:12:13.365 Format NVM: Not Supported 00:12:13.365 Firmware Activate/Download: Not Supported 00:12:13.365 Namespace Management: Not Supported 00:12:13.365 Device Self-Test: Not Supported 00:12:13.365 Directives: Not Supported 00:12:13.365 NVMe-MI: Not Supported 00:12:13.365 Virtualization Management: Not Supported 00:12:13.365 Doorbell Buffer Config: Not Supported 00:12:13.365 Get LBA Status Capability: Not Supported 00:12:13.365 Command & Feature Lockdown Capability: Not Supported 00:12:13.365 Abort Command Limit: 4 00:12:13.365 Async Event Request Limit: 4 00:12:13.365 Number of Firmware Slots: N/A 00:12:13.365 Firmware Slot 1 Read-Only: N/A 00:12:13.365 Firmware Activation Without Reset: N/A 00:12:13.365 Multiple Update Detection Support: N/A 00:12:13.365 Firmware Update Granularity: No Information Provided 00:12:13.365 Per-Namespace SMART Log: No 00:12:13.365 Asymmetric Namespace Access Log Page: Not Supported 00:12:13.365 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:13.365 Command Effects Log Page: Supported 00:12:13.365 Get Log Page Extended Data: Supported 00:12:13.365 Telemetry Log Pages: Not Supported 00:12:13.365 Persistent Event Log Pages: Not Supported 00:12:13.365 Supported Log Pages Log Page: May Support 00:12:13.365 Commands Supported & Effects Log Page: Not Supported 00:12:13.365 Feature Identifiers & Effects Log Page:May Support 00:12:13.365 NVMe-MI Commands & Effects Log Page: May Support 00:12:13.365 Data Area 4 for Telemetry Log: Not Supported 00:12:13.365 Error Log Page Entries Supported: 128 00:12:13.365 Keep Alive: Supported 00:12:13.365 Keep Alive Granularity: 10000 ms 00:12:13.365 00:12:13.365 NVM Command Set Attributes 00:12:13.365 ========================== 
00:12:13.365 Submission Queue Entry Size 00:12:13.365 Max: 64 00:12:13.365 Min: 64 00:12:13.365 Completion Queue Entry Size 00:12:13.365 Max: 16 00:12:13.365 Min: 16 00:12:13.365 Number of Namespaces: 32 00:12:13.365 Compare Command: Supported 00:12:13.365 Write Uncorrectable Command: Not Supported 00:12:13.365 Dataset Management Command: Supported 00:12:13.365 Write Zeroes Command: Supported 00:12:13.365 Set Features Save Field: Not Supported 00:12:13.365 Reservations: Not Supported 00:12:13.365 Timestamp: Not Supported 00:12:13.365 Copy: Supported 00:12:13.365 Volatile Write Cache: Present 00:12:13.365 Atomic Write Unit (Normal): 1 00:12:13.365 Atomic Write Unit (PFail): 1 00:12:13.365 Atomic Compare & Write Unit: 1 00:12:13.365 Fused Compare & Write: Supported 00:12:13.365 Scatter-Gather List 00:12:13.365 SGL Command Set: Supported (Dword aligned) 00:12:13.365 SGL Keyed: Not Supported 00:12:13.365 SGL Bit Bucket Descriptor: Not Supported 00:12:13.365 SGL Metadata Pointer: Not Supported 00:12:13.365 Oversized SGL: Not Supported 00:12:13.365 SGL Metadata Address: Not Supported 00:12:13.365 SGL Offset: Not Supported 00:12:13.365 Transport SGL Data Block: Not Supported 00:12:13.365 Replay Protected Memory Block: Not Supported 00:12:13.365 00:12:13.365 Firmware Slot Information 00:12:13.365 ========================= 00:12:13.365 Active slot: 1 00:12:13.365 Slot 1 Firmware Revision: 24.05 00:12:13.365 00:12:13.365 00:12:13.365 Commands Supported and Effects 00:12:13.365 ============================== 00:12:13.365 Admin Commands 00:12:13.365 -------------- 00:12:13.365 Get Log Page (02h): Supported 00:12:13.365 Identify (06h): Supported 00:12:13.365 Abort (08h): Supported 00:12:13.365 Set Features (09h): Supported 00:12:13.365 Get Features (0Ah): Supported 00:12:13.365 Asynchronous Event Request (0Ch): Supported 00:12:13.365 Keep Alive (18h): Supported 00:12:13.365 I/O Commands 00:12:13.365 ------------ 00:12:13.365 Flush (00h): Supported LBA-Change 00:12:13.365 Write (01h): Supported LBA-Change 00:12:13.365 Read (02h): Supported 00:12:13.365 Compare (05h): Supported 00:12:13.365 Write Zeroes (08h): Supported LBA-Change 00:12:13.365 Dataset Management (09h): Supported LBA-Change 00:12:13.365 Copy (19h): Supported LBA-Change 00:12:13.365 Unknown (79h): Supported LBA-Change 00:12:13.365 Unknown (7Ah): Supported 00:12:13.365 00:12:13.365 Error Log 00:12:13.365 ========= 00:12:13.365 00:12:13.365 Arbitration 00:12:13.366 =========== 00:12:13.366 Arbitration Burst: 1 00:12:13.366 00:12:13.366 Power Management 00:12:13.366 ================ 00:12:13.366 Number of Power States: 1 00:12:13.366 Current Power State: Power State #0 00:12:13.366 Power State #0: 00:12:13.366 Max Power: 0.00 W 00:12:13.366 Non-Operational State: Operational 00:12:13.366 Entry Latency: Not Reported 00:12:13.366 Exit Latency: Not Reported 00:12:13.366 Relative Read Throughput: 0 00:12:13.366 Relative Read Latency: 0 00:12:13.366 Relative Write Throughput: 0 00:12:13.366 Relative Write Latency: 0 00:12:13.366 Idle Power: Not Reported 00:12:13.366 Active Power: Not Reported 00:12:13.366 Non-Operational Permissive Mode: Not Supported 00:12:13.366 00:12:13.366 Health Information 00:12:13.366 ================== 00:12:13.366 Critical Warnings: 00:12:13.366 Available Spare Space: OK 00:12:13.366 Temperature: OK 00:12:13.366 Device Reliability: OK 00:12:13.366 Read Only: No 00:12:13.366 Volatile Memory Backup: OK 00:12:13.366 Current Temperature: 0 Kelvin (-273 Celsius)
[2024-05-15 12:13:41.748852] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:13.366 [2024-05-15 12:13:41.748865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:13.366 [2024-05-15 12:13:41.748891] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:13.366 [2024-05-15 12:13:41.748901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.366 [2024-05-15 12:13:41.748909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.366 [2024-05-15 12:13:41.748916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.366 [2024-05-15 12:13:41.748924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.366 [2024-05-15 12:13:41.749900] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:13.366 [2024-05-15 12:13:41.749913] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:13.366 [2024-05-15 12:13:41.750899] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:13.366 [2024-05-15 12:13:41.751205] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:13.366 [2024-05-15 12:13:41.751214] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:13.366 [2024-05-15 12:13:41.751912] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:13.366 [2024-05-15 12:13:41.751926] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:13.366 [2024-05-15 12:13:41.751975] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:13.366 [2024-05-15 12:13:41.756202] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:12:13.366 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:13.366 Available Spare: 0% 00:12:13.366 Available Spare Threshold: 0% 00:12:13.366 Life Percentage Used: 0% 00:12:13.366 Data Units Read: 0 00:12:13.366 Data Units Written: 0 00:12:13.366 Host Read Commands: 0 00:12:13.366 Host Write Commands: 0 00:12:13.366 Controller Busy Time: 0 minutes 00:12:13.366 Power Cycles: 0 00:12:13.366 Power On Hours: 0 hours 00:12:13.366 Unsafe Shutdowns: 0 00:12:13.366 Unrecoverable Media Errors: 0 00:12:13.366 Lifetime Error Log Entries: 0 00:12:13.366 Warning Temperature Time: 0 minutes 00:12:13.366 Critical Temperature Time: 0 minutes 00:12:13.366 00:12:13.366 Number of Queues 00:12:13.366 ================ 00:12:13.366 Number of I/O Submission Queues: 127 00:12:13.366 Number of I/O Completion Queues: 127 00:12:13.366 00:12:13.366 Active Namespaces 00:12:13.366 ================= 00:12:13.366 Namespace
ID:1 00:12:13.366 Error Recovery Timeout: Unlimited 00:12:13.366 Command Set Identifier: NVM (00h) 00:12:13.366 Deallocate: Supported 00:12:13.366 Deallocated/Unwritten Error: Not Supported 00:12:13.366 Deallocated Read Value: Unknown 00:12:13.366 Deallocate in Write Zeroes: Not Supported 00:12:13.366 Deallocated Guard Field: 0xFFFF 00:12:13.366 Flush: Supported 00:12:13.366 Reservation: Supported 00:12:13.366 Namespace Sharing Capabilities: Multiple Controllers 00:12:13.366 Size (in LBAs): 131072 (0GiB) 00:12:13.366 Capacity (in LBAs): 131072 (0GiB) 00:12:13.366 Utilization (in LBAs): 131072 (0GiB) 00:12:13.366 NGUID: 64C7E7423D2F4B7FAD727410806C68C5 00:12:13.366 UUID: 64c7e742-3d2f-4b7f-ad72-7410806c68c5 00:12:13.366 Thin Provisioning: Not Supported 00:12:13.366 Per-NS Atomic Units: Yes 00:12:13.366 Atomic Boundary Size (Normal): 0 00:12:13.366 Atomic Boundary Size (PFail): 0 00:12:13.366 Atomic Boundary Offset: 0 00:12:13.366 Maximum Single Source Range Length: 65535 00:12:13.366 Maximum Copy Length: 65535 00:12:13.366 Maximum Source Range Count: 1 00:12:13.366 NGUID/EUI64 Never Reused: No 00:12:13.366 Namespace Write Protected: No 00:12:13.366 Number of LBA Formats: 1 00:12:13.366 Current LBA Format: LBA Format #00 00:12:13.366 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:13.366 00:12:13.366 12:13:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:13.366 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.624 [2024-05-15 12:13:41.976016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:18.889 Initializing NVMe Controllers 00:12:18.889 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:18.889 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:18.889 Initialization complete. Launching workers. 00:12:18.889 ======================================================== 00:12:18.889 Latency(us) 00:12:18.889 Device Information : IOPS MiB/s Average min max 00:12:18.889 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40014.40 156.31 3201.32 902.83 8664.23 00:12:18.889 ======================================================== 00:12:18.889 Total : 40014.40 156.31 3201.32 902.83 8664.23 00:12:18.889 00:12:18.889 [2024-05-15 12:13:46.997227] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:18.889 12:13:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:18.889 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.889 [2024-05-15 12:13:47.216234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:24.217 Initializing NVMe Controllers 00:12:24.217 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:24.217 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:24.218 Initialization complete. Launching workers. 
00:12:24.218 ======================================================== 00:12:24.218 Latency(us) 00:12:24.218 Device Information : IOPS MiB/s Average min max 00:12:24.218 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16045.33 62.68 7976.74 5982.24 8981.17 00:12:24.218 ======================================================== 00:12:24.218 Total : 16045.33 62.68 7976.74 5982.24 8981.17 00:12:24.218 00:12:24.218 [2024-05-15 12:13:52.250031] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:24.218 12:13:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:24.218 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.218 [2024-05-15 12:13:52.465026] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:29.488 [2024-05-15 12:13:57.584729] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:29.488 Initializing NVMe Controllers 00:12:29.488 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:29.488 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:29.488 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:29.488 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:29.488 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:29.488 Initialization complete. Launching workers. 00:12:29.488 Starting thread on core 2 00:12:29.488 Starting thread on core 3 00:12:29.488 Starting thread on core 1 00:12:29.488 12:13:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:29.488 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.488 [2024-05-15 12:13:57.885542] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:33.677 [2024-05-15 12:14:01.699427] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:33.677 Initializing NVMe Controllers 00:12:33.677 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:33.677 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:33.677 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:33.677 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:33.677 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:33.677 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:33.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:33.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:33.677 Initialization complete. Launching workers. 
00:12:33.677 Starting thread on core 1 with urgent priority queue 00:12:33.677 Starting thread on core 2 with urgent priority queue 00:12:33.677 Starting thread on core 3 with urgent priority queue 00:12:33.677 Starting thread on core 0 with urgent priority queue 00:12:33.677 SPDK bdev Controller (SPDK1 ) core 0: 6744.33 IO/s 14.83 secs/100000 ios 00:12:33.677 SPDK bdev Controller (SPDK1 ) core 1: 6815.33 IO/s 14.67 secs/100000 ios 00:12:33.677 SPDK bdev Controller (SPDK1 ) core 2: 7203.00 IO/s 13.88 secs/100000 ios 00:12:33.677 SPDK bdev Controller (SPDK1 ) core 3: 6851.67 IO/s 14.59 secs/100000 ios 00:12:33.677 ======================================================== 00:12:33.677 00:12:33.677 12:14:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:33.677 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.677 [2024-05-15 12:14:01.984494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:33.677 Initializing NVMe Controllers 00:12:33.677 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:33.677 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:33.677 Namespace ID: 1 size: 0GB 00:12:33.677 Initialization complete. 00:12:33.677 INFO: using host memory buffer for IO 00:12:33.677 Hello world! 00:12:33.677 [2024-05-15 12:14:02.020861] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:33.677 12:14:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:33.677 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.936 [2024-05-15 12:14:02.304629] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:34.870 Initializing NVMe Controllers 00:12:34.870 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:34.870 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:34.870 Initialization complete. Launching workers. 
00:12:34.870 submit (in ns) avg, min, max = 7530.1, 3028.8, 4001252.8 00:12:34.870 complete (in ns) avg, min, max = 20435.3, 1675.2, 3999249.6 00:12:34.870 00:12:34.870 Submit histogram 00:12:34.870 ================ 00:12:34.870 Range in us Cumulative Count 00:12:34.870 3.021 - 3.034: 0.0059% ( 1) 00:12:34.870 3.034 - 3.046: 0.0177% ( 2) 00:12:34.870 3.046 - 3.059: 0.0414% ( 4) 00:12:34.870 3.059 - 3.072: 0.0827% ( 7) 00:12:34.870 3.072 - 3.085: 0.1359% ( 9) 00:12:34.870 3.085 - 3.098: 0.5555% ( 71) 00:12:34.870 3.098 - 3.110: 1.6784% ( 190) 00:12:34.870 3.110 - 3.123: 3.7114% ( 344) 00:12:34.870 3.123 - 3.136: 6.7136% ( 508) 00:12:34.870 3.136 - 3.149: 11.1045% ( 743) 00:12:34.870 3.149 - 3.162: 16.3525% ( 888) 00:12:34.870 3.162 - 3.174: 22.1323% ( 978) 00:12:34.870 3.174 - 3.187: 27.7229% ( 946) 00:12:34.870 3.187 - 3.200: 33.1777% ( 923) 00:12:34.870 3.200 - 3.213: 39.0107% ( 987) 00:12:34.870 3.213 - 3.226: 45.4169% ( 1084) 00:12:34.870 3.226 - 3.238: 51.6754% ( 1059) 00:12:34.870 3.238 - 3.251: 55.9600% ( 725) 00:12:34.870 3.251 - 3.264: 59.1159% ( 534) 00:12:34.870 3.264 - 3.277: 61.8344% ( 460) 00:12:34.870 3.277 - 3.302: 68.0634% ( 1054) 00:12:34.870 3.302 - 3.328: 73.6481% ( 945) 00:12:34.870 3.328 - 3.354: 80.0544% ( 1084) 00:12:34.870 3.354 - 3.379: 85.8283% ( 977) 00:12:34.870 3.379 - 3.405: 87.5244% ( 287) 00:12:34.871 3.405 - 3.430: 88.3695% ( 143) 00:12:34.871 3.430 - 3.456: 89.2914% ( 156) 00:12:34.871 3.456 - 3.482: 90.6979% ( 238) 00:12:34.871 3.482 - 3.507: 92.3645% ( 282) 00:12:34.871 3.507 - 3.533: 94.0429% ( 284) 00:12:34.871 3.533 - 3.558: 95.2899% ( 211) 00:12:34.871 3.558 - 3.584: 96.3832% ( 185) 00:12:34.871 3.584 - 3.610: 97.4942% ( 188) 00:12:34.871 3.610 - 3.635: 98.3984% ( 153) 00:12:34.871 3.635 - 3.661: 98.8594% ( 78) 00:12:34.871 3.661 - 3.686: 99.1667% ( 52) 00:12:34.871 3.686 - 3.712: 99.4267% ( 44) 00:12:34.871 3.712 - 3.738: 99.5272% ( 17) 00:12:34.871 3.738 - 3.763: 99.5686% ( 7) 00:12:34.871 3.763 - 3.789: 99.5863% ( 3) 00:12:34.871 3.789 - 3.814: 99.5981% ( 2) 00:12:34.871 3.814 - 3.840: 99.6040% ( 1) 00:12:34.871 3.840 - 3.866: 99.6100% ( 1) 00:12:34.871 3.866 - 3.891: 99.6218% ( 2) 00:12:34.871 3.891 - 3.917: 99.6336% ( 2) 00:12:34.871 3.994 - 4.019: 99.6395% ( 1) 00:12:34.871 4.019 - 4.045: 99.6454% ( 1) 00:12:34.871 4.045 - 4.070: 99.6572% ( 2) 00:12:34.871 4.403 - 4.429: 99.6631% ( 1) 00:12:34.871 5.581 - 5.606: 99.6691% ( 1) 00:12:34.871 5.658 - 5.683: 99.6750% ( 1) 00:12:34.871 6.374 - 6.400: 99.6809% ( 1) 00:12:34.871 6.502 - 6.528: 99.6868% ( 1) 00:12:34.871 6.528 - 6.554: 99.6986% ( 2) 00:12:34.871 6.554 - 6.605: 99.7104% ( 2) 00:12:34.871 6.656 - 6.707: 99.7341% ( 4) 00:12:34.871 6.707 - 6.758: 99.7400% ( 1) 00:12:34.871 6.758 - 6.810: 99.7459% ( 1) 00:12:34.871 6.912 - 6.963: 99.7518% ( 1) 00:12:34.871 6.963 - 7.014: 99.7813% ( 5) 00:12:34.871 7.066 - 7.117: 99.7932% ( 2) 00:12:34.871 7.219 - 7.270: 99.8050% ( 2) 00:12:34.871 7.322 - 7.373: 99.8109% ( 1) 00:12:34.871 7.373 - 7.424: 99.8168% ( 1) 00:12:34.871 7.424 - 7.475: 99.8227% ( 1) 00:12:34.871 7.475 - 7.526: 99.8345% ( 2) 00:12:34.871 7.526 - 7.578: 99.8404% ( 1) 00:12:34.871 7.680 - 7.731: 99.8523% ( 2) 00:12:34.871 7.834 - 7.885: 99.8582% ( 1) 00:12:34.871 7.885 - 7.936: 99.8641% ( 1) 00:12:34.871 8.141 - 8.192: 99.8700% ( 1) 00:12:34.871 8.499 - 8.550: 99.8759% ( 1) 00:12:34.871 9.933 - 9.984: 99.8818% ( 1) 00:12:34.871 11.930 - 11.981: 99.8877% ( 1) 00:12:34.871 13.619 - 13.722: 99.8936% ( 1) 00:12:34.871 3984.589 - 4010.803: 100.0000% ( 18) 00:12:34.871 00:12:34.871 Complete 
histogram 00:12:34.871 ================== 00:12:34.871 Range in us Cumulative Count 00:12:34.871 1.664 - 1.677: 0.0118% ( 2) 00:12:34.871 1.677 - [2024-05-15 12:14:03.326504] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:34.871 1.690: 0.0473% ( 6) 00:12:34.871 1.690 - 1.702: 0.0591% ( 2) 00:12:34.871 1.702 - 1.715: 0.1359% ( 13) 00:12:34.871 1.715 - 1.728: 2.3107% ( 368) 00:12:34.871 1.728 - 1.741: 5.8093% ( 592) 00:12:34.871 1.741 - 1.754: 7.2691% ( 247) 00:12:34.871 1.754 - 1.766: 20.5071% ( 2240) 00:12:34.871 1.766 - 1.779: 70.1968% ( 8408) 00:12:34.871 1.779 - 1.792: 87.6248% ( 2949) 00:12:34.871 1.792 - 1.805: 93.2333% ( 949) 00:12:34.871 1.805 - 1.818: 95.7863% ( 432) 00:12:34.871 1.818 - 1.830: 96.5900% ( 136) 00:12:34.871 1.830 - 1.843: 97.6006% ( 171) 00:12:34.871 1.843 - 1.856: 98.6171% ( 172) 00:12:34.871 1.856 - 1.869: 98.9126% ( 50) 00:12:34.871 1.869 - 1.882: 99.0131% ( 17) 00:12:34.871 1.882 - 1.894: 99.0426% ( 5) 00:12:34.871 1.894 - 1.907: 99.0485% ( 1) 00:12:34.871 1.907 - 1.920: 99.0603% ( 2) 00:12:34.871 1.920 - 1.933: 99.1017% ( 7) 00:12:34.871 1.933 - 1.946: 99.1135% ( 2) 00:12:34.871 1.946 - 1.958: 99.1372% ( 4) 00:12:34.871 1.958 - 1.971: 99.1549% ( 3) 00:12:34.871 1.971 - 1.984: 99.1785% ( 4) 00:12:34.871 1.984 - 1.997: 99.1844% ( 1) 00:12:34.871 1.997 - 2.010: 99.1904% ( 1) 00:12:34.871 2.010 - 2.022: 99.1963% ( 1) 00:12:34.871 2.022 - 2.035: 99.2140% ( 3) 00:12:34.871 2.035 - 2.048: 99.2199% ( 1) 00:12:34.871 2.074 - 2.086: 99.2317% ( 2) 00:12:34.871 2.099 - 2.112: 99.2376% ( 1) 00:12:34.871 2.112 - 2.125: 99.2435% ( 1) 00:12:34.871 2.138 - 2.150: 99.2495% ( 1) 00:12:34.871 2.176 - 2.189: 99.2554% ( 1) 00:12:34.871 2.189 - 2.202: 99.2613% ( 1) 00:12:34.871 2.202 - 2.214: 99.2731% ( 2) 00:12:34.871 2.266 - 2.278: 99.2790% ( 1) 00:12:34.871 3.891 - 3.917: 99.2849% ( 1) 00:12:34.871 4.019 - 4.045: 99.2908% ( 1) 00:12:34.871 4.147 - 4.173: 99.2967% ( 1) 00:12:34.871 4.198 - 4.224: 99.3086% ( 2) 00:12:34.871 4.890 - 4.915: 99.3145% ( 1) 00:12:34.871 4.941 - 4.966: 99.3204% ( 1) 00:12:34.871 4.992 - 5.018: 99.3263% ( 1) 00:12:34.871 5.018 - 5.043: 99.3322% ( 1) 00:12:34.871 5.043 - 5.069: 99.3381% ( 1) 00:12:34.871 5.146 - 5.171: 99.3499% ( 2) 00:12:34.871 5.248 - 5.274: 99.3558% ( 1) 00:12:34.871 5.376 - 5.402: 99.3617% ( 1) 00:12:34.871 5.427 - 5.453: 99.3676% ( 1) 00:12:34.871 5.581 - 5.606: 99.3795% ( 2) 00:12:34.871 5.658 - 5.683: 99.3854% ( 1) 00:12:34.871 5.760 - 5.786: 99.3913% ( 1) 00:12:34.871 5.811 - 5.837: 99.3972% ( 1) 00:12:34.871 5.837 - 5.862: 99.4031% ( 1) 00:12:34.871 5.939 - 5.965: 99.4090% ( 1) 00:12:34.871 5.965 - 5.990: 99.4149% ( 1) 00:12:34.871 6.042 - 6.067: 99.4208% ( 1) 00:12:34.871 6.067 - 6.093: 99.4267% ( 1) 00:12:34.871 6.093 - 6.118: 99.4327% ( 1) 00:12:34.871 6.246 - 6.272: 99.4386% ( 1) 00:12:34.871 6.298 - 6.323: 99.4445% ( 1) 00:12:34.871 6.502 - 6.528: 99.4504% ( 1) 00:12:34.871 6.605 - 6.656: 99.4563% ( 1) 00:12:34.871 6.656 - 6.707: 99.4622% ( 1) 00:12:34.871 6.707 - 6.758: 99.4681% ( 1) 00:12:34.871 7.014 - 7.066: 99.4740% ( 1) 00:12:34.871 7.066 - 7.117: 99.4858% ( 2) 00:12:34.871 7.270 - 7.322: 99.4918% ( 1) 00:12:34.871 7.834 - 7.885: 99.4977% ( 1) 00:12:34.871 8.346 - 8.397: 99.5036% ( 1) 00:12:34.871 8.448 - 8.499: 99.5095% ( 1) 00:12:34.871 11.520 - 11.571: 99.5154% ( 1) 00:12:34.871 11.622 - 11.674: 99.5213% ( 1) 00:12:34.871 12.134 - 12.186: 99.5272% ( 1) 00:12:34.871 17.613 - 17.715: 99.5331% ( 1) 00:12:34.871 3905.946 - 3932.160: 99.5390% ( 1) 00:12:34.871 
3984.589 - 4010.803: 100.0000% ( 78) 00:12:34.871 00:12:34.871 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:34.871 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:34.871 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:34.871 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:34.871 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:35.129 [ 00:12:35.129 { 00:12:35.129 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:35.129 "subtype": "Discovery", 00:12:35.129 "listen_addresses": [], 00:12:35.129 "allow_any_host": true, 00:12:35.129 "hosts": [] 00:12:35.129 }, 00:12:35.129 { 00:12:35.129 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:35.129 "subtype": "NVMe", 00:12:35.129 "listen_addresses": [ 00:12:35.129 { 00:12:35.129 "trtype": "VFIOUSER", 00:12:35.129 "adrfam": "IPv4", 00:12:35.129 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:35.129 "trsvcid": "0" 00:12:35.129 } 00:12:35.129 ], 00:12:35.129 "allow_any_host": true, 00:12:35.129 "hosts": [], 00:12:35.129 "serial_number": "SPDK1", 00:12:35.129 "model_number": "SPDK bdev Controller", 00:12:35.129 "max_namespaces": 32, 00:12:35.129 "min_cntlid": 1, 00:12:35.129 "max_cntlid": 65519, 00:12:35.129 "namespaces": [ 00:12:35.129 { 00:12:35.129 "nsid": 1, 00:12:35.129 "bdev_name": "Malloc1", 00:12:35.129 "name": "Malloc1", 00:12:35.129 "nguid": "64C7E7423D2F4B7FAD727410806C68C5", 00:12:35.129 "uuid": "64c7e742-3d2f-4b7f-ad72-7410806c68c5" 00:12:35.129 } 00:12:35.129 ] 00:12:35.129 }, 00:12:35.129 { 00:12:35.129 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:35.129 "subtype": "NVMe", 00:12:35.129 "listen_addresses": [ 00:12:35.129 { 00:12:35.129 "trtype": "VFIOUSER", 00:12:35.129 "adrfam": "IPv4", 00:12:35.129 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:35.129 "trsvcid": "0" 00:12:35.129 } 00:12:35.129 ], 00:12:35.129 "allow_any_host": true, 00:12:35.129 "hosts": [], 00:12:35.129 "serial_number": "SPDK2", 00:12:35.129 "model_number": "SPDK bdev Controller", 00:12:35.129 "max_namespaces": 32, 00:12:35.129 "min_cntlid": 1, 00:12:35.129 "max_cntlid": 65519, 00:12:35.129 "namespaces": [ 00:12:35.129 { 00:12:35.129 "nsid": 1, 00:12:35.129 "bdev_name": "Malloc2", 00:12:35.129 "name": "Malloc2", 00:12:35.129 "nguid": "3F8D02A652834D2FB33FD8C37882DFEA", 00:12:35.129 "uuid": "3f8d02a6-5283-4d2f-b33f-d8c37882dfea" 00:12:35.129 } 00:12:35.129 ] 00:12:35.129 } 00:12:35.129 ] 00:12:35.129 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:35.129 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2053852 00:12:35.129 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:35.129 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:35.129 12:14:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # local i=0 00:12:35.129 12:14:03 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:35.129 12:14:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:35.129 12:14:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # return 0 00:12:35.129 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:35.129 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:35.129 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.387 [2024-05-15 12:14:03.730608] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:35.387 Malloc3 00:12:35.387 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:35.646 [2024-05-15 12:14:03.936157] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:35.646 12:14:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:35.646 Asynchronous Event Request test 00:12:35.646 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.646 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.646 Registering asynchronous event callbacks... 00:12:35.646 Starting namespace attribute notice tests for all controllers... 00:12:35.646 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:35.646 aer_cb - Changed Namespace 00:12:35.646 Cleaning up... 
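The aer test above hot-attaches Malloc3 to nqn.2019-07.io.spdk:cnode1 as a second namespace and waits for the resulting namespace-attribute-change AER; the nvmf_get_subsystems listing that follows is what confirms the change. A minimal sketch of doing that check programmatically, using only the scripts/rpc.py path already shown in this run (illustrative, not part of nvmf_vfio_user.sh):

    #!/usr/bin/env python3
    # Illustrative check: confirm that a namespace backed by bdev "Malloc3"
    # now appears under nqn.2019-07.io.spdk:cnode1 (see the JSON listing below).
    import json
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"  # path taken from the log above
    SUBNQN = "nqn.2019-07.io.spdk:cnode1"

    out = subprocess.check_output([RPC, "nvmf_get_subsystems"], text=True)
    subsystems = json.loads(out)

    cnode1 = next(s for s in subsystems if s.get("nqn") == SUBNQN)
    namespaces = {ns["bdev_name"]: ns["nsid"] for ns in cnode1.get("namespaces", [])}

    # Expect Malloc1 as nsid 1 and the hot-added Malloc3 as nsid 2.
    assert namespaces.get("Malloc3") == 2, namespaces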
00:12:35.646 [ 00:12:35.646 { 00:12:35.646 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:35.646 "subtype": "Discovery", 00:12:35.646 "listen_addresses": [], 00:12:35.646 "allow_any_host": true, 00:12:35.646 "hosts": [] 00:12:35.646 }, 00:12:35.646 { 00:12:35.646 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:35.646 "subtype": "NVMe", 00:12:35.646 "listen_addresses": [ 00:12:35.646 { 00:12:35.646 "trtype": "VFIOUSER", 00:12:35.646 "adrfam": "IPv4", 00:12:35.646 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:35.646 "trsvcid": "0" 00:12:35.646 } 00:12:35.646 ], 00:12:35.646 "allow_any_host": true, 00:12:35.646 "hosts": [], 00:12:35.646 "serial_number": "SPDK1", 00:12:35.646 "model_number": "SPDK bdev Controller", 00:12:35.646 "max_namespaces": 32, 00:12:35.646 "min_cntlid": 1, 00:12:35.646 "max_cntlid": 65519, 00:12:35.646 "namespaces": [ 00:12:35.646 { 00:12:35.646 "nsid": 1, 00:12:35.646 "bdev_name": "Malloc1", 00:12:35.646 "name": "Malloc1", 00:12:35.646 "nguid": "64C7E7423D2F4B7FAD727410806C68C5", 00:12:35.646 "uuid": "64c7e742-3d2f-4b7f-ad72-7410806c68c5" 00:12:35.646 }, 00:12:35.646 { 00:12:35.646 "nsid": 2, 00:12:35.646 "bdev_name": "Malloc3", 00:12:35.646 "name": "Malloc3", 00:12:35.646 "nguid": "4B3E9561506342A7A836CB1DB0637432", 00:12:35.646 "uuid": "4b3e9561-5063-42a7-a836-cb1db0637432" 00:12:35.646 } 00:12:35.646 ] 00:12:35.646 }, 00:12:35.646 { 00:12:35.646 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:35.646 "subtype": "NVMe", 00:12:35.646 "listen_addresses": [ 00:12:35.646 { 00:12:35.646 "trtype": "VFIOUSER", 00:12:35.646 "adrfam": "IPv4", 00:12:35.646 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:35.646 "trsvcid": "0" 00:12:35.646 } 00:12:35.646 ], 00:12:35.646 "allow_any_host": true, 00:12:35.646 "hosts": [], 00:12:35.646 "serial_number": "SPDK2", 00:12:35.646 "model_number": "SPDK bdev Controller", 00:12:35.646 "max_namespaces": 32, 00:12:35.646 "min_cntlid": 1, 00:12:35.646 "max_cntlid": 65519, 00:12:35.646 "namespaces": [ 00:12:35.646 { 00:12:35.646 "nsid": 1, 00:12:35.646 "bdev_name": "Malloc2", 00:12:35.646 "name": "Malloc2", 00:12:35.646 "nguid": "3F8D02A652834D2FB33FD8C37882DFEA", 00:12:35.646 "uuid": "3f8d02a6-5283-4d2f-b33f-d8c37882dfea" 00:12:35.646 } 00:12:35.646 ] 00:12:35.646 } 00:12:35.646 ] 00:12:35.646 12:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2053852 00:12:35.646 12:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:35.646 12:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:35.646 12:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:35.646 12:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:35.646 [2024-05-15 12:14:04.162210] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:12:35.646 [2024-05-15 12:14:04.162256] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054040 ] 00:12:35.646 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.907 [2024-05-15 12:14:04.192466] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:35.907 [2024-05-15 12:14:04.202439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:35.907 [2024-05-15 12:14:04.202460] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7feaf5d07000 00:12:35.907 [2024-05-15 12:14:04.203440] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.907 [2024-05-15 12:14:04.204444] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.907 [2024-05-15 12:14:04.205454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.907 [2024-05-15 12:14:04.206465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:35.907 [2024-05-15 12:14:04.207468] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:35.907 [2024-05-15 12:14:04.208468] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.907 [2024-05-15 12:14:04.209475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:35.907 [2024-05-15 12:14:04.210482] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.907 [2024-05-15 12:14:04.211497] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:35.907 [2024-05-15 12:14:04.211512] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7feaf5cfc000 00:12:35.907 [2024-05-15 12:14:04.212405] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:35.907 [2024-05-15 12:14:04.225288] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:35.907 [2024-05-15 12:14:04.225316] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:35.907 [2024-05-15 12:14:04.227380] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:35.907 [2024-05-15 12:14:04.227416] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:35.907 [2024-05-15 12:14:04.227484] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:12:35.907 [2024-05-15 12:14:04.227503] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:35.907 [2024-05-15 12:14:04.227511] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:35.907 [2024-05-15 12:14:04.228383] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:35.907 [2024-05-15 12:14:04.228394] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:35.907 [2024-05-15 12:14:04.228403] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:35.907 [2024-05-15 12:14:04.229395] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:35.907 [2024-05-15 12:14:04.229407] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:35.907 [2024-05-15 12:14:04.229416] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:35.907 [2024-05-15 12:14:04.230394] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:35.907 [2024-05-15 12:14:04.230404] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:35.908 [2024-05-15 12:14:04.231400] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:35.908 [2024-05-15 12:14:04.231410] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:35.908 [2024-05-15 12:14:04.231417] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:35.908 [2024-05-15 12:14:04.231427] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:35.908 [2024-05-15 12:14:04.231534] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:35.908 [2024-05-15 12:14:04.231540] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:35.908 [2024-05-15 12:14:04.231550] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:35.908 [2024-05-15 12:14:04.233197] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:35.908 [2024-05-15 12:14:04.233415] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:35.908 [2024-05-15 12:14:04.234429] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:35.908 [2024-05-15 12:14:04.235432] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:35.908 [2024-05-15 12:14:04.235473] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:35.908 [2024-05-15 12:14:04.236437] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:35.908 [2024-05-15 12:14:04.236448] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:35.908 [2024-05-15 12:14:04.236454] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.236473] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:35.908 [2024-05-15 12:14:04.236486] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.236502] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:35.908 [2024-05-15 12:14:04.236508] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:35.908 [2024-05-15 12:14:04.236521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:35.908 [2024-05-15 12:14:04.244201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:35.908 [2024-05-15 12:14:04.244216] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:35.908 [2024-05-15 12:14:04.244223] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:35.908 [2024-05-15 12:14:04.244228] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:35.908 [2024-05-15 12:14:04.244235] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:35.908 [2024-05-15 12:14:04.244241] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:35.908 [2024-05-15 12:14:04.244247] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:35.908 [2024-05-15 12:14:04.244253] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.244266] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.244279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:35.908 [2024-05-15 12:14:04.252198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:35.908 [2024-05-15 12:14:04.252215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.908 [2024-05-15 12:14:04.252227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.908 [2024-05-15 12:14:04.252236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.908 [2024-05-15 12:14:04.252246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.908 [2024-05-15 12:14:04.252253] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.252261] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.252271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:35.908 [2024-05-15 12:14:04.260199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:35.908 [2024-05-15 12:14:04.260207] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:35.908 [2024-05-15 12:14:04.260216] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.260225] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.260232] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.260241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:35.908 [2024-05-15 12:14:04.268198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:35.908 [2024-05-15 12:14:04.268243] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.268252] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.268260] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:35.908 [2024-05-15 12:14:04.268266] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:35.908 [2024-05-15 12:14:04.268274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:35.908 
[2024-05-15 12:14:04.276197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:35.908 [2024-05-15 12:14:04.276213] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:35.908 [2024-05-15 12:14:04.276222] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.276231] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.276239] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:35.908 [2024-05-15 12:14:04.276245] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:35.908 [2024-05-15 12:14:04.276252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:35.908 [2024-05-15 12:14:04.284198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:35.908 [2024-05-15 12:14:04.284211] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.284220] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.284228] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:35.908 [2024-05-15 12:14:04.284234] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:35.908 [2024-05-15 12:14:04.284241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:35.908 [2024-05-15 12:14:04.292198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:35.908 [2024-05-15 12:14:04.292211] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.292220] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.292228] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.292235] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.292242] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.292248] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:35.908 [2024-05-15 12:14:04.292254] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:35.908 [2024-05-15 12:14:04.292260] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:35.908 [2024-05-15 12:14:04.292279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:35.908 [2024-05-15 12:14:04.300197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:35.908 [2024-05-15 12:14:04.300212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:35.908 [2024-05-15 12:14:04.308201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:35.908 [2024-05-15 12:14:04.308216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:35.908 [2024-05-15 12:14:04.316198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:35.909 [2024-05-15 12:14:04.316212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:35.909 [2024-05-15 12:14:04.324197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:35.909 [2024-05-15 12:14:04.324211] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:35.909 [2024-05-15 12:14:04.324218] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:35.909 [2024-05-15 12:14:04.324222] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:35.909 [2024-05-15 12:14:04.324229] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:35.909 [2024-05-15 12:14:04.324236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:35.909 [2024-05-15 12:14:04.324244] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:35.909 [2024-05-15 12:14:04.324249] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:35.909 [2024-05-15 12:14:04.324256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:35.909 [2024-05-15 12:14:04.324264] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:35.909 [2024-05-15 12:14:04.324269] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:35.909 [2024-05-15 12:14:04.324276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:35.909 [2024-05-15 12:14:04.324286] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:35.909 [2024-05-15 12:14:04.324292] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:35.909 [2024-05-15 12:14:04.324299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:35.909 [2024-05-15 12:14:04.332198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:35.909 [2024-05-15 12:14:04.332215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:35.909 [2024-05-15 12:14:04.332226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:35.909 [2024-05-15 12:14:04.332236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:35.909 ===================================================== 00:12:35.909 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:35.909 ===================================================== 00:12:35.909 Controller Capabilities/Features 00:12:35.909 ================================ 00:12:35.909 Vendor ID: 4e58 00:12:35.909 Subsystem Vendor ID: 4e58 00:12:35.909 Serial Number: SPDK2 00:12:35.909 Model Number: SPDK bdev Controller 00:12:35.909 Firmware Version: 24.05 00:12:35.909 Recommended Arb Burst: 6 00:12:35.909 IEEE OUI Identifier: 8d 6b 50 00:12:35.909 Multi-path I/O 00:12:35.909 May have multiple subsystem ports: Yes 00:12:35.909 May have multiple controllers: Yes 00:12:35.909 Associated with SR-IOV VF: No 00:12:35.909 Max Data Transfer Size: 131072 00:12:35.909 Max Number of Namespaces: 32 00:12:35.909 Max Number of I/O Queues: 127 00:12:35.909 NVMe Specification Version (VS): 1.3 00:12:35.909 NVMe Specification Version (Identify): 1.3 00:12:35.909 Maximum Queue Entries: 256 00:12:35.909 Contiguous Queues Required: Yes 00:12:35.909 Arbitration Mechanisms Supported 00:12:35.909 Weighted Round Robin: Not Supported 00:12:35.909 Vendor Specific: Not Supported 00:12:35.909 Reset Timeout: 15000 ms 00:12:35.909 Doorbell Stride: 4 bytes 00:12:35.909 NVM Subsystem Reset: Not Supported 00:12:35.909 Command Sets Supported 00:12:35.909 NVM Command Set: Supported 00:12:35.909 Boot Partition: Not Supported 00:12:35.909 Memory Page Size Minimum: 4096 bytes 00:12:35.909 Memory Page Size Maximum: 4096 bytes 00:12:35.909 Persistent Memory Region: Not Supported 00:12:35.909 Optional Asynchronous Events Supported 00:12:35.909 Namespace Attribute Notices: Supported 00:12:35.909 Firmware Activation Notices: Not Supported 00:12:35.909 ANA Change Notices: Not Supported 00:12:35.909 PLE Aggregate Log Change Notices: Not Supported 00:12:35.909 LBA Status Info Alert Notices: Not Supported 00:12:35.909 EGE Aggregate Log Change Notices: Not Supported 00:12:35.909 Normal NVM Subsystem Shutdown event: Not Supported 00:12:35.909 Zone Descriptor Change Notices: Not Supported 00:12:35.909 Discovery Log Change Notices: Not Supported 00:12:35.909 Controller Attributes 00:12:35.909 128-bit Host Identifier: Supported 00:12:35.909 Non-Operational Permissive Mode: Not Supported 00:12:35.909 NVM Sets: Not Supported 00:12:35.909 Read Recovery Levels: Not Supported 00:12:35.909 Endurance Groups: Not Supported 00:12:35.909 Predictable Latency Mode: Not Supported 00:12:35.909 Traffic Based Keep ALive: Not Supported 00:12:35.909 Namespace Granularity: Not Supported 
00:12:35.909 SQ Associations: Not Supported 00:12:35.909 UUID List: Not Supported 00:12:35.909 Multi-Domain Subsystem: Not Supported 00:12:35.909 Fixed Capacity Management: Not Supported 00:12:35.909 Variable Capacity Management: Not Supported 00:12:35.909 Delete Endurance Group: Not Supported 00:12:35.909 Delete NVM Set: Not Supported 00:12:35.909 Extended LBA Formats Supported: Not Supported 00:12:35.909 Flexible Data Placement Supported: Not Supported 00:12:35.909 00:12:35.909 Controller Memory Buffer Support 00:12:35.909 ================================ 00:12:35.909 Supported: No 00:12:35.909 00:12:35.909 Persistent Memory Region Support 00:12:35.909 ================================ 00:12:35.909 Supported: No 00:12:35.909 00:12:35.909 Admin Command Set Attributes 00:12:35.909 ============================ 00:12:35.909 Security Send/Receive: Not Supported 00:12:35.909 Format NVM: Not Supported 00:12:35.909 Firmware Activate/Download: Not Supported 00:12:35.909 Namespace Management: Not Supported 00:12:35.909 Device Self-Test: Not Supported 00:12:35.909 Directives: Not Supported 00:12:35.909 NVMe-MI: Not Supported 00:12:35.909 Virtualization Management: Not Supported 00:12:35.909 Doorbell Buffer Config: Not Supported 00:12:35.909 Get LBA Status Capability: Not Supported 00:12:35.909 Command & Feature Lockdown Capability: Not Supported 00:12:35.909 Abort Command Limit: 4 00:12:35.909 Async Event Request Limit: 4 00:12:35.909 Number of Firmware Slots: N/A 00:12:35.909 Firmware Slot 1 Read-Only: N/A 00:12:35.909 Firmware Activation Without Reset: N/A 00:12:35.909 Multiple Update Detection Support: N/A 00:12:35.909 Firmware Update Granularity: No Information Provided 00:12:35.909 Per-Namespace SMART Log: No 00:12:35.909 Asymmetric Namespace Access Log Page: Not Supported 00:12:35.909 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:35.909 Command Effects Log Page: Supported 00:12:35.909 Get Log Page Extended Data: Supported 00:12:35.909 Telemetry Log Pages: Not Supported 00:12:35.909 Persistent Event Log Pages: Not Supported 00:12:35.909 Supported Log Pages Log Page: May Support 00:12:35.909 Commands Supported & Effects Log Page: Not Supported 00:12:35.909 Feature Identifiers & Effects Log Page:May Support 00:12:35.909 NVMe-MI Commands & Effects Log Page: May Support 00:12:35.909 Data Area 4 for Telemetry Log: Not Supported 00:12:35.909 Error Log Page Entries Supported: 128 00:12:35.909 Keep Alive: Supported 00:12:35.909 Keep Alive Granularity: 10000 ms 00:12:35.909 00:12:35.909 NVM Command Set Attributes 00:12:35.909 ========================== 00:12:35.909 Submission Queue Entry Size 00:12:35.909 Max: 64 00:12:35.909 Min: 64 00:12:35.909 Completion Queue Entry Size 00:12:35.909 Max: 16 00:12:35.909 Min: 16 00:12:35.909 Number of Namespaces: 32 00:12:35.909 Compare Command: Supported 00:12:35.909 Write Uncorrectable Command: Not Supported 00:12:35.909 Dataset Management Command: Supported 00:12:35.909 Write Zeroes Command: Supported 00:12:35.909 Set Features Save Field: Not Supported 00:12:35.909 Reservations: Not Supported 00:12:35.909 Timestamp: Not Supported 00:12:35.909 Copy: Supported 00:12:35.909 Volatile Write Cache: Present 00:12:35.909 Atomic Write Unit (Normal): 1 00:12:35.909 Atomic Write Unit (PFail): 1 00:12:35.909 Atomic Compare & Write Unit: 1 00:12:35.909 Fused Compare & Write: Supported 00:12:35.909 Scatter-Gather List 00:12:35.909 SGL Command Set: Supported (Dword aligned) 00:12:35.909 SGL Keyed: Not Supported 00:12:35.909 SGL Bit Bucket Descriptor: Not Supported 00:12:35.909 
SGL Metadata Pointer: Not Supported 00:12:35.909 Oversized SGL: Not Supported 00:12:35.909 SGL Metadata Address: Not Supported 00:12:35.909 SGL Offset: Not Supported 00:12:35.909 Transport SGL Data Block: Not Supported 00:12:35.909 Replay Protected Memory Block: Not Supported 00:12:35.909 00:12:35.909 Firmware Slot Information 00:12:35.909 ========================= 00:12:35.909 Active slot: 1 00:12:35.909 Slot 1 Firmware Revision: 24.05 00:12:35.909 00:12:35.909 00:12:35.909 Commands Supported and Effects 00:12:35.909 ============================== 00:12:35.909 Admin Commands 00:12:35.909 -------------- 00:12:35.909 Get Log Page (02h): Supported 00:12:35.909 Identify (06h): Supported 00:12:35.909 Abort (08h): Supported 00:12:35.909 Set Features (09h): Supported 00:12:35.909 Get Features (0Ah): Supported 00:12:35.910 Asynchronous Event Request (0Ch): Supported 00:12:35.910 Keep Alive (18h): Supported 00:12:35.910 I/O Commands 00:12:35.910 ------------ 00:12:35.910 Flush (00h): Supported LBA-Change 00:12:35.910 Write (01h): Supported LBA-Change 00:12:35.910 Read (02h): Supported 00:12:35.910 Compare (05h): Supported 00:12:35.910 Write Zeroes (08h): Supported LBA-Change 00:12:35.910 Dataset Management (09h): Supported LBA-Change 00:12:35.910 Copy (19h): Supported LBA-Change 00:12:35.910 Unknown (79h): Supported LBA-Change 00:12:35.910 Unknown (7Ah): Supported 00:12:35.910 00:12:35.910 Error Log 00:12:35.910 ========= 00:12:35.910 00:12:35.910 Arbitration 00:12:35.910 =========== 00:12:35.910 Arbitration Burst: 1 00:12:35.910 00:12:35.910 Power Management 00:12:35.910 ================ 00:12:35.910 Number of Power States: 1 00:12:35.910 Current Power State: Power State #0 00:12:35.910 Power State #0: 00:12:35.910 Max Power: 0.00 W 00:12:35.910 Non-Operational State: Operational 00:12:35.910 Entry Latency: Not Reported 00:12:35.910 Exit Latency: Not Reported 00:12:35.910 Relative Read Throughput: 0 00:12:35.910 Relative Read Latency: 0 00:12:35.910 Relative Write Throughput: 0 00:12:35.910 Relative Write Latency: 0 00:12:35.910 Idle Power: Not Reported 00:12:35.910 Active Power: Not Reported 00:12:35.910 Non-Operational Permissive Mode: Not Supported 00:12:35.910 00:12:35.910 Health Information 00:12:35.910 ================== 00:12:35.910 Critical Warnings: 00:12:35.910 Available Spare Space: OK 00:12:35.910 Temperature: OK 00:12:35.910 Device Reliability: OK 00:12:35.910 Read Only: No 00:12:35.910 Volatile Memory Backup: OK 00:12:35.910 Current Temperature: 0 Kelvin (-2[2024-05-15 12:14:04.332328] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:35.910 [2024-05-15 12:14:04.340199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:35.910 [2024-05-15 12:14:04.340231] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:35.910 [2024-05-15 12:14:04.340242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.910 [2024-05-15 12:14:04.340250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.910 [2024-05-15 12:14:04.340257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.910 [2024-05-15 12:14:04.340265] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.910 [2024-05-15 12:14:04.340306] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:35.910 [2024-05-15 12:14:04.340317] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:35.910 [2024-05-15 12:14:04.341316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:35.910 [2024-05-15 12:14:04.341360] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:35.910 [2024-05-15 12:14:04.341368] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:35.910 [2024-05-15 12:14:04.342318] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:35.910 [2024-05-15 12:14:04.342330] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:35.910 [2024-05-15 12:14:04.342377] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:35.910 [2024-05-15 12:14:04.345199] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:35.910 73 Celsius) 00:12:35.910 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:35.910 Available Spare: 0% 00:12:35.910 Available Spare Threshold: 0% 00:12:35.910 Life Percentage Used: 0% 00:12:35.910 Data Units Read: 0 00:12:35.910 Data Units Written: 0 00:12:35.910 Host Read Commands: 0 00:12:35.910 Host Write Commands: 0 00:12:35.910 Controller Busy Time: 0 minutes 00:12:35.910 Power Cycles: 0 00:12:35.910 Power On Hours: 0 hours 00:12:35.910 Unsafe Shutdowns: 0 00:12:35.910 Unrecoverable Media Errors: 0 00:12:35.910 Lifetime Error Log Entries: 0 00:12:35.910 Warning Temperature Time: 0 minutes 00:12:35.910 Critical Temperature Time: 0 minutes 00:12:35.910 00:12:35.910 Number of Queues 00:12:35.910 ================ 00:12:35.910 Number of I/O Submission Queues: 127 00:12:35.910 Number of I/O Completion Queues: 127 00:12:35.910 00:12:35.910 Active Namespaces 00:12:35.910 ================= 00:12:35.910 Namespace ID:1 00:12:35.910 Error Recovery Timeout: Unlimited 00:12:35.910 Command Set Identifier: NVM (00h) 00:12:35.910 Deallocate: Supported 00:12:35.910 Deallocated/Unwritten Error: Not Supported 00:12:35.910 Deallocated Read Value: Unknown 00:12:35.910 Deallocate in Write Zeroes: Not Supported 00:12:35.910 Deallocated Guard Field: 0xFFFF 00:12:35.910 Flush: Supported 00:12:35.910 Reservation: Supported 00:12:35.910 Namespace Sharing Capabilities: Multiple Controllers 00:12:35.910 Size (in LBAs): 131072 (0GiB) 00:12:35.910 Capacity (in LBAs): 131072 (0GiB) 00:12:35.910 Utilization (in LBAs): 131072 (0GiB) 00:12:35.910 NGUID: 3F8D02A652834D2FB33FD8C37882DFEA 00:12:35.910 UUID: 3f8d02a6-5283-4d2f-b33f-d8c37882dfea 00:12:35.910 Thin Provisioning: Not Supported 00:12:35.910 Per-NS Atomic Units: Yes 00:12:35.910 Atomic Boundary Size (Normal): 0 00:12:35.910 Atomic Boundary Size (PFail): 0 00:12:35.910 Atomic Boundary Offset: 0 00:12:35.910 Maximum Single Source Range Length: 65535 
00:12:35.910 Maximum Copy Length: 65535 00:12:35.910 Maximum Source Range Count: 1 00:12:35.910 NGUID/EUI64 Never Reused: No 00:12:35.910 Namespace Write Protected: No 00:12:35.910 Number of LBA Formats: 1 00:12:35.910 Current LBA Format: LBA Format #00 00:12:35.910 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:35.910 00:12:35.910 12:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:35.910 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.169 [2024-05-15 12:14:04.563218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:41.436 Initializing NVMe Controllers 00:12:41.436 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:41.436 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:41.436 Initialization complete. Launching workers. 00:12:41.436 ======================================================== 00:12:41.436 Latency(us) 00:12:41.436 Device Information : IOPS MiB/s Average min max 00:12:41.436 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39957.02 156.08 3203.27 903.38 7345.72 00:12:41.436 ======================================================== 00:12:41.436 Total : 39957.02 156.08 3203.27 903.38 7345.72 00:12:41.436 00:12:41.436 [2024-05-15 12:14:09.670439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:41.436 12:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:41.436 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.436 [2024-05-15 12:14:09.890094] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:46.698 Initializing NVMe Controllers 00:12:46.698 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:46.698 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:46.698 Initialization complete. Launching workers. 
00:12:46.698 ======================================================== 00:12:46.698 Latency(us) 00:12:46.698 Device Information : IOPS MiB/s Average min max 00:12:46.698 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39919.28 155.93 3206.30 924.39 7094.06 00:12:46.698 ======================================================== 00:12:46.698 Total : 39919.28 155.93 3206.30 924.39 7094.06 00:12:46.698 00:12:46.698 [2024-05-15 12:14:14.910830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:46.698 12:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:46.698 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.698 [2024-05-15 12:14:15.122379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:52.205 [2024-05-15 12:14:20.257280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:52.205 Initializing NVMe Controllers 00:12:52.205 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:52.205 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:52.205 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:52.205 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:52.205 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:52.205 Initialization complete. Launching workers. 00:12:52.205 Starting thread on core 2 00:12:52.205 Starting thread on core 3 00:12:52.205 Starting thread on core 1 00:12:52.205 12:14:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:52.205 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.205 [2024-05-15 12:14:20.561645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:55.517 [2024-05-15 12:14:23.628937] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:55.517 Initializing NVMe Controllers 00:12:55.517 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.517 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.517 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:55.517 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:55.517 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:55.517 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:55.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:55.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:55.517 Initialization complete. Launching workers. 
00:12:55.517 Starting thread on core 1 with urgent priority queue 00:12:55.517 Starting thread on core 2 with urgent priority queue 00:12:55.517 Starting thread on core 3 with urgent priority queue 00:12:55.517 Starting thread on core 0 with urgent priority queue 00:12:55.517 SPDK bdev Controller (SPDK2 ) core 0: 9004.33 IO/s 11.11 secs/100000 ios 00:12:55.517 SPDK bdev Controller (SPDK2 ) core 1: 7075.67 IO/s 14.13 secs/100000 ios 00:12:55.517 SPDK bdev Controller (SPDK2 ) core 2: 7142.33 IO/s 14.00 secs/100000 ios 00:12:55.517 SPDK bdev Controller (SPDK2 ) core 3: 9359.33 IO/s 10.68 secs/100000 ios 00:12:55.517 ======================================================== 00:12:55.517 00:12:55.517 12:14:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:55.517 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.517 [2024-05-15 12:14:23.920588] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:55.517 Initializing NVMe Controllers 00:12:55.517 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.517 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.517 Namespace ID: 1 size: 0GB 00:12:55.517 Initialization complete. 00:12:55.517 INFO: using host memory buffer for IO 00:12:55.517 Hello world! 00:12:55.517 [2024-05-15 12:14:23.930641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:55.517 12:14:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:55.517 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.776 [2024-05-15 12:14:24.222383] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.156 Initializing NVMe Controllers 00:12:57.156 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.156 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.156 Initialization complete. Launching workers. 
00:12:57.156 submit (in ns) avg, min, max = 8407.1, 3085.6, 4000761.6 00:12:57.156 complete (in ns) avg, min, max = 19016.0, 1751.2, 4000174.4 00:12:57.156 00:12:57.156 Submit histogram 00:12:57.156 ================ 00:12:57.156 Range in us Cumulative Count 00:12:57.156 3.085 - 3.098: 0.1163% ( 20) 00:12:57.157 3.098 - 3.110: 0.7618% ( 111) 00:12:57.157 3.110 - 3.123: 1.8261% ( 183) 00:12:57.157 3.123 - 3.136: 3.4312% ( 276) 00:12:57.157 3.136 - 3.149: 5.8622% ( 418) 00:12:57.157 3.149 - 3.162: 9.5900% ( 641) 00:12:57.157 3.162 - 3.174: 14.1088% ( 777) 00:12:57.157 3.174 - 3.187: 19.0288% ( 846) 00:12:57.157 3.187 - 3.200: 24.6351% ( 964) 00:12:57.157 3.200 - 3.213: 30.6484% ( 1034) 00:12:57.157 3.213 - 3.226: 36.6153% ( 1026) 00:12:57.157 3.226 - 3.238: 43.0532% ( 1107) 00:12:57.157 3.238 - 3.251: 48.1245% ( 872) 00:12:57.157 3.251 - 3.264: 52.4920% ( 751) 00:12:57.157 3.264 - 3.277: 56.6735% ( 719) 00:12:57.157 3.277 - 3.302: 64.0012% ( 1260) 00:12:57.157 3.302 - 3.328: 70.4042% ( 1101) 00:12:57.157 3.328 - 3.354: 76.6851% ( 1080) 00:12:57.157 3.354 - 3.379: 82.7799% ( 1048) 00:12:57.157 3.379 - 3.405: 86.4495% ( 631) 00:12:57.157 3.405 - 3.430: 88.2466% ( 309) 00:12:57.157 3.430 - 3.456: 89.1364% ( 153) 00:12:57.157 3.456 - 3.482: 90.0843% ( 163) 00:12:57.157 3.482 - 3.507: 91.3405% ( 216) 00:12:57.157 3.507 - 3.533: 93.0678% ( 297) 00:12:57.157 3.533 - 3.558: 94.4868% ( 244) 00:12:57.157 3.558 - 3.584: 96.0395% ( 267) 00:12:57.157 3.584 - 3.610: 97.0864% ( 180) 00:12:57.157 3.610 - 3.635: 98.1041% ( 175) 00:12:57.157 3.635 - 3.661: 98.7031% ( 103) 00:12:57.157 3.661 - 3.686: 99.1684% ( 80) 00:12:57.157 3.686 - 3.712: 99.3894% ( 38) 00:12:57.157 3.712 - 3.738: 99.5289% ( 24) 00:12:57.157 3.738 - 3.763: 99.5755% ( 8) 00:12:57.157 3.763 - 3.789: 99.6045% ( 5) 00:12:57.157 3.789 - 3.814: 99.6336% ( 5) 00:12:57.157 5.274 - 5.299: 99.6394% ( 1) 00:12:57.157 5.350 - 5.376: 99.6452% ( 1) 00:12:57.157 5.402 - 5.427: 99.6511% ( 1) 00:12:57.157 5.606 - 5.632: 99.6569% ( 1) 00:12:57.157 5.658 - 5.683: 99.6627% ( 1) 00:12:57.157 5.683 - 5.709: 99.6685% ( 1) 00:12:57.157 5.734 - 5.760: 99.6801% ( 2) 00:12:57.157 5.760 - 5.786: 99.6860% ( 1) 00:12:57.157 5.811 - 5.837: 99.6918% ( 1) 00:12:57.157 5.837 - 5.862: 99.6976% ( 1) 00:12:57.157 5.888 - 5.914: 99.7034% ( 1) 00:12:57.157 5.990 - 6.016: 99.7092% ( 1) 00:12:57.157 6.195 - 6.221: 99.7150% ( 1) 00:12:57.157 6.349 - 6.374: 99.7208% ( 1) 00:12:57.157 6.426 - 6.451: 99.7267% ( 1) 00:12:57.157 6.451 - 6.477: 99.7325% ( 1) 00:12:57.157 6.502 - 6.528: 99.7383% ( 1) 00:12:57.157 6.605 - 6.656: 99.7499% ( 2) 00:12:57.157 6.810 - 6.861: 99.7557% ( 1) 00:12:57.157 6.912 - 6.963: 99.7616% ( 1) 00:12:57.157 6.963 - 7.014: 99.7674% ( 1) 00:12:57.157 7.014 - 7.066: 99.7732% ( 1) 00:12:57.157 7.117 - 7.168: 99.7790% ( 1) 00:12:57.157 7.168 - 7.219: 99.7848% ( 1) 00:12:57.157 7.219 - 7.270: 99.7965% ( 2) 00:12:57.157 7.424 - 7.475: 99.8081% ( 2) 00:12:57.157 7.526 - 7.578: 99.8139% ( 1) 00:12:57.157 7.680 - 7.731: 99.8197% ( 1) 00:12:57.157 7.782 - 7.834: 99.8313% ( 2) 00:12:57.157 7.834 - 7.885: 99.8372% ( 1) 00:12:57.157 8.090 - 8.141: 99.8430% ( 1) 00:12:57.157 8.550 - 8.602: 99.8488% ( 1) 00:12:57.157 8.602 - 8.653: 99.8546% ( 1) 00:12:57.157 9.062 - 9.114: 99.8604% ( 1) 00:12:57.157 10.189 - 10.240: 99.8662% ( 1) 00:12:57.157 10.854 - 10.906: 99.8721% ( 1) 00:12:57.157 3984.589 - 4010.803: 100.0000% ( 22) 00:12:57.157 00:12:57.157 Complete histogram 00:12:57.157 ================== 00:12:57.157 Range in us Cumulative Count 00:12:57.157 1.741 - 1.754: 0.0116% 
( 2) 00:12:57.157 1.754 - 1.766: 0.4304% ( 72) 00:12:57.157 1.766 - 1.779: 1.1864% ( 130) 00:12:57.157 1.779 - 1.792: 2.2914% ( 190) 00:12:57.157 1.792 - [2024-05-15 12:14:25.317069] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.157 1.805: 21.6342% ( 3326) 00:12:57.157 1.805 - 1.818: 74.4054% ( 9074) 00:12:57.157 1.818 - 1.830: 89.3050% ( 2562) 00:12:57.157 1.830 - 1.843: 94.4519% ( 885) 00:12:57.157 1.843 - 1.856: 96.4525% ( 344) 00:12:57.157 1.856 - 1.869: 97.2434% ( 136) 00:12:57.157 1.869 - 1.882: 98.2379% ( 171) 00:12:57.157 1.882 - 1.894: 98.9416% ( 121) 00:12:57.157 1.894 - 1.907: 99.1335% ( 33) 00:12:57.157 1.907 - 1.920: 99.2033% ( 12) 00:12:57.157 1.920 - 1.933: 99.2498% ( 8) 00:12:57.157 1.933 - 1.946: 99.2789% ( 5) 00:12:57.157 1.946 - 1.958: 99.2963% ( 3) 00:12:57.157 1.971 - 1.984: 99.3021% ( 1) 00:12:57.157 1.984 - 1.997: 99.3138% ( 2) 00:12:57.157 1.997 - 2.010: 99.3428% ( 5) 00:12:57.157 2.022 - 2.035: 99.3486% ( 1) 00:12:57.157 2.035 - 2.048: 99.3545% ( 1) 00:12:57.157 2.074 - 2.086: 99.3603% ( 1) 00:12:57.157 2.086 - 2.099: 99.3661% ( 1) 00:12:57.157 2.202 - 2.214: 99.3719% ( 1) 00:12:57.157 4.070 - 4.096: 99.3777% ( 1) 00:12:57.157 4.762 - 4.787: 99.3835% ( 1) 00:12:57.157 4.966 - 4.992: 99.3894% ( 1) 00:12:57.157 5.069 - 5.094: 99.3952% ( 1) 00:12:57.157 5.120 - 5.146: 99.4010% ( 1) 00:12:57.157 5.146 - 5.171: 99.4068% ( 1) 00:12:57.157 5.171 - 5.197: 99.4126% ( 1) 00:12:57.157 5.197 - 5.222: 99.4184% ( 1) 00:12:57.157 5.325 - 5.350: 99.4301% ( 2) 00:12:57.157 5.376 - 5.402: 99.4359% ( 1) 00:12:57.157 5.453 - 5.478: 99.4417% ( 1) 00:12:57.157 5.504 - 5.530: 99.4533% ( 2) 00:12:57.157 5.555 - 5.581: 99.4591% ( 1) 00:12:57.157 5.632 - 5.658: 99.4650% ( 1) 00:12:57.157 5.658 - 5.683: 99.4708% ( 1) 00:12:57.157 5.683 - 5.709: 99.4766% ( 1) 00:12:57.157 5.709 - 5.734: 99.4882% ( 2) 00:12:57.157 5.786 - 5.811: 99.4940% ( 1) 00:12:57.157 5.811 - 5.837: 99.4999% ( 1) 00:12:57.157 5.939 - 5.965: 99.5115% ( 2) 00:12:57.157 5.965 - 5.990: 99.5173% ( 1) 00:12:57.157 6.758 - 6.810: 99.5231% ( 1) 00:12:57.157 7.424 - 7.475: 99.5289% ( 1) 00:12:57.157 7.475 - 7.526: 99.5347% ( 1) 00:12:57.157 8.243 - 8.294: 99.5406% ( 1) 00:12:57.157 10.752 - 10.803: 99.5464% ( 1) 00:12:57.157 10.957 - 11.008: 99.5522% ( 1) 00:12:57.157 11.366 - 11.418: 99.5580% ( 1) 00:12:57.157 11.571 - 11.622: 99.5638% ( 1) 00:12:57.157 12.186 - 12.237: 99.5696% ( 1) 00:12:57.157 3984.589 - 4010.803: 100.0000% ( 74) 00:12:57.157 00:12:57.157 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:57.157 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:57.157 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:57.157 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:57.157 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:57.157 [ 00:12:57.157 { 00:12:57.157 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:57.157 "subtype": "Discovery", 00:12:57.157 "listen_addresses": [], 00:12:57.157 "allow_any_host": true, 00:12:57.157 "hosts": [] 00:12:57.157 }, 00:12:57.157 { 00:12:57.157 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:57.157 "subtype": "NVMe", 
00:12:57.157 "listen_addresses": [ 00:12:57.157 { 00:12:57.157 "trtype": "VFIOUSER", 00:12:57.157 "adrfam": "IPv4", 00:12:57.157 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:57.157 "trsvcid": "0" 00:12:57.157 } 00:12:57.157 ], 00:12:57.157 "allow_any_host": true, 00:12:57.157 "hosts": [], 00:12:57.157 "serial_number": "SPDK1", 00:12:57.157 "model_number": "SPDK bdev Controller", 00:12:57.157 "max_namespaces": 32, 00:12:57.157 "min_cntlid": 1, 00:12:57.157 "max_cntlid": 65519, 00:12:57.157 "namespaces": [ 00:12:57.157 { 00:12:57.157 "nsid": 1, 00:12:57.157 "bdev_name": "Malloc1", 00:12:57.157 "name": "Malloc1", 00:12:57.157 "nguid": "64C7E7423D2F4B7FAD727410806C68C5", 00:12:57.157 "uuid": "64c7e742-3d2f-4b7f-ad72-7410806c68c5" 00:12:57.157 }, 00:12:57.157 { 00:12:57.157 "nsid": 2, 00:12:57.157 "bdev_name": "Malloc3", 00:12:57.157 "name": "Malloc3", 00:12:57.157 "nguid": "4B3E9561506342A7A836CB1DB0637432", 00:12:57.157 "uuid": "4b3e9561-5063-42a7-a836-cb1db0637432" 00:12:57.157 } 00:12:57.157 ] 00:12:57.157 }, 00:12:57.157 { 00:12:57.157 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:57.157 "subtype": "NVMe", 00:12:57.157 "listen_addresses": [ 00:12:57.157 { 00:12:57.157 "trtype": "VFIOUSER", 00:12:57.157 "adrfam": "IPv4", 00:12:57.157 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:57.157 "trsvcid": "0" 00:12:57.157 } 00:12:57.157 ], 00:12:57.157 "allow_any_host": true, 00:12:57.157 "hosts": [], 00:12:57.157 "serial_number": "SPDK2", 00:12:57.157 "model_number": "SPDK bdev Controller", 00:12:57.157 "max_namespaces": 32, 00:12:57.157 "min_cntlid": 1, 00:12:57.157 "max_cntlid": 65519, 00:12:57.157 "namespaces": [ 00:12:57.157 { 00:12:57.157 "nsid": 1, 00:12:57.157 "bdev_name": "Malloc2", 00:12:57.157 "name": "Malloc2", 00:12:57.158 "nguid": "3F8D02A652834D2FB33FD8C37882DFEA", 00:12:57.158 "uuid": "3f8d02a6-5283-4d2f-b33f-d8c37882dfea" 00:12:57.158 } 00:12:57.158 ] 00:12:57.158 } 00:12:57.158 ] 00:12:57.158 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:57.158 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2057660 00:12:57.158 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:57.158 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:57.158 12:14:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # local i=0 00:12:57.158 12:14:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:57.158 12:14:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:57.158 12:14:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # return 0 00:12:57.158 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:57.158 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:57.158 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.417 [2024-05-15 12:14:25.715612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.417 Malloc4 00:12:57.417 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:57.417 [2024-05-15 12:14:25.909058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.417 12:14:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:57.677 Asynchronous Event Request test 00:12:57.677 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.677 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.677 Registering asynchronous event callbacks... 00:12:57.677 Starting namespace attribute notice tests for all controllers... 00:12:57.677 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:57.677 aer_cb - Changed Namespace 00:12:57.677 Cleaning up... 00:12:57.677 [ 00:12:57.677 { 00:12:57.677 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:57.677 "subtype": "Discovery", 00:12:57.677 "listen_addresses": [], 00:12:57.677 "allow_any_host": true, 00:12:57.677 "hosts": [] 00:12:57.677 }, 00:12:57.677 { 00:12:57.677 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:57.677 "subtype": "NVMe", 00:12:57.677 "listen_addresses": [ 00:12:57.677 { 00:12:57.677 "trtype": "VFIOUSER", 00:12:57.677 "adrfam": "IPv4", 00:12:57.677 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:57.677 "trsvcid": "0" 00:12:57.677 } 00:12:57.677 ], 00:12:57.677 "allow_any_host": true, 00:12:57.677 "hosts": [], 00:12:57.677 "serial_number": "SPDK1", 00:12:57.677 "model_number": "SPDK bdev Controller", 00:12:57.677 "max_namespaces": 32, 00:12:57.677 "min_cntlid": 1, 00:12:57.677 "max_cntlid": 65519, 00:12:57.677 "namespaces": [ 00:12:57.677 { 00:12:57.677 "nsid": 1, 00:12:57.677 "bdev_name": "Malloc1", 00:12:57.677 "name": "Malloc1", 00:12:57.677 "nguid": "64C7E7423D2F4B7FAD727410806C68C5", 00:12:57.677 "uuid": "64c7e742-3d2f-4b7f-ad72-7410806c68c5" 00:12:57.677 }, 00:12:57.677 { 00:12:57.677 "nsid": 2, 00:12:57.677 "bdev_name": "Malloc3", 00:12:57.677 "name": "Malloc3", 00:12:57.677 "nguid": "4B3E9561506342A7A836CB1DB0637432", 00:12:57.677 "uuid": "4b3e9561-5063-42a7-a836-cb1db0637432" 00:12:57.677 } 00:12:57.677 ] 00:12:57.677 }, 00:12:57.677 { 00:12:57.677 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:57.677 "subtype": "NVMe", 00:12:57.677 "listen_addresses": [ 00:12:57.677 { 00:12:57.677 "trtype": "VFIOUSER", 00:12:57.677 "adrfam": "IPv4", 00:12:57.677 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:57.677 "trsvcid": "0" 00:12:57.677 } 00:12:57.677 ], 00:12:57.677 "allow_any_host": true, 00:12:57.677 "hosts": [], 00:12:57.677 "serial_number": "SPDK2", 00:12:57.677 "model_number": "SPDK bdev Controller", 00:12:57.677 
"max_namespaces": 32, 00:12:57.677 "min_cntlid": 1, 00:12:57.677 "max_cntlid": 65519, 00:12:57.677 "namespaces": [ 00:12:57.677 { 00:12:57.677 "nsid": 1, 00:12:57.677 "bdev_name": "Malloc2", 00:12:57.677 "name": "Malloc2", 00:12:57.677 "nguid": "3F8D02A652834D2FB33FD8C37882DFEA", 00:12:57.677 "uuid": "3f8d02a6-5283-4d2f-b33f-d8c37882dfea" 00:12:57.677 }, 00:12:57.677 { 00:12:57.677 "nsid": 2, 00:12:57.677 "bdev_name": "Malloc4", 00:12:57.677 "name": "Malloc4", 00:12:57.677 "nguid": "BA257F14811A4370BF77DAC14289680F", 00:12:57.677 "uuid": "ba257f14-811a-4370-bf77-dac14289680f" 00:12:57.677 } 00:12:57.677 ] 00:12:57.677 } 00:12:57.677 ] 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2057660 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2049489 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' -z 2049489 ']' 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # kill -0 2049489 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # uname 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2049489 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2049489' 00:12:57.677 killing process with pid 2049489 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # kill 2049489 00:12:57.677 [2024-05-15 12:14:26.171259] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:57.677 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@971 -- # wait 2049489 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2057789 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2057789' 00:12:57.937 Process pid: 2057789 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2057789 00:12:57.937 12:14:26 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # '[' -z 2057789 ']' 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:57.937 12:14:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:58.197 [2024-05-15 12:14:26.503869] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:58.197 [2024-05-15 12:14:26.504798] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:12:58.197 [2024-05-15 12:14:26.504838] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.197 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.197 [2024-05-15 12:14:26.573543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.197 [2024-05-15 12:14:26.637824] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.197 [2024-05-15 12:14:26.637864] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.197 [2024-05-15 12:14:26.637873] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.197 [2024-05-15 12:14:26.637882] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.197 [2024-05-15 12:14:26.637889] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.197 [2024-05-15 12:14:26.637942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.198 [2024-05-15 12:14:26.638036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.198 [2024-05-15 12:14:26.638061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.198 [2024-05-15 12:14:26.638062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.198 [2024-05-15 12:14:26.716552] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:58.198 [2024-05-15 12:14:26.716654] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:58.198 [2024-05-15 12:14:26.716902] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:58.198 [2024-05-15 12:14:26.717251] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:58.198 [2024-05-15 12:14:26.717505] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
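The interrupt-mode bring-up traced below comes down to a short sequence of SPDK RPC calls. A minimal sketch consolidating the exact commands from this run (relative paths assume the SPDK repo root; the bdev and subsystem names are the ones this job uses):

  # start the target on 4 cores in interrupt mode, as in the trace above
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # create the VFIOUSER transport with the interrupt-mode options, then one vfio-user device
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Repeating the mkdir/bdev/subsystem/listener block with Malloc2, nqn.2019-07.io.spdk:cnode2 and /var/run/vfio-user/domain/vfio-user2/2 gives the second controller set up in the trace that follows.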
00:12:59.136 12:14:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:59.136 12:14:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@861 -- # return 0 00:12:59.136 12:14:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:00.074 12:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:00.074 12:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:00.074 12:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:00.074 12:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:00.074 12:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:00.074 12:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:00.333 Malloc1 00:13:00.333 12:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:00.592 12:14:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:00.593 12:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:00.852 [2024-05-15 12:14:29.198532] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:00.852 12:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:00.852 12:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:00.852 12:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:01.111 Malloc2 00:13:01.111 12:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:01.111 12:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:01.370 12:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:01.629 12:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:01.629 12:14:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2057789 00:13:01.629 12:14:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' -z 2057789 ']' 00:13:01.629 12:14:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # kill -0 2057789 
00:13:01.629 12:14:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # uname 00:13:01.629 12:14:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:01.629 12:14:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2057789 00:13:01.629 12:14:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:01.629 12:14:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:01.629 12:14:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2057789' 00:13:01.629 killing process with pid 2057789 00:13:01.629 12:14:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # kill 2057789 00:13:01.629 [2024-05-15 12:14:30.001976] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:01.629 12:14:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@971 -- # wait 2057789 00:13:01.898 12:14:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:01.898 12:14:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:01.898 00:13:01.898 real 0m52.308s 00:13:01.898 user 3m25.835s 00:13:01.898 sys 0m4.769s 00:13:01.898 12:14:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:01.898 12:14:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:01.898 ************************************ 00:13:01.898 END TEST nvmf_vfio_user 00:13:01.898 ************************************ 00:13:01.898 12:14:30 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:01.898 12:14:30 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:01.898 12:14:30 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:01.898 12:14:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:01.898 ************************************ 00:13:01.899 START TEST nvmf_vfio_user_nvme_compliance 00:13:01.899 ************************************ 00:13:01.899 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:02.162 * Looking for test storage... 
00:13:02.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2058653 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2058653' 00:13:02.162 Process pid: 2058653 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2058653 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@828 -- # '[' -z 2058653 ']' 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:02.162 12:14:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:02.162 [2024-05-15 12:14:30.518457] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:13:02.162 [2024-05-15 12:14:30.518505] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.162 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.162 [2024-05-15 12:14:30.587734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.162 [2024-05-15 12:14:30.661805] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.162 [2024-05-15 12:14:30.661842] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.162 [2024-05-15 12:14:30.661851] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.162 [2024-05-15 12:14:30.661860] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.162 [2024-05-15 12:14:30.661868] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
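The compliance suite that follows only needs a single vfio-user endpoint. A minimal sketch of the setup the rpc_cmd trace below performs (rpc_cmd is the test harness wrapper around scripts/rpc.py; paths are relative to the SPDK repo root, and the malloc0 bdev size matches this run):

  # target already running with 3 cores (-m 0x7), as started above
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  # run the compliance binary against that endpoint
  ./test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'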
00:13:02.162 [2024-05-15 12:14:30.661917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.162 [2024-05-15 12:14:30.661939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.162 [2024-05-15 12:14:30.661942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.100 12:14:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:03.100 12:14:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@861 -- # return 0 00:13:03.100 12:14:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.038 malloc0 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.038 [2024-05-15 12:14:32.395325] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.038 12:14:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:04.038 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.038 00:13:04.038 00:13:04.038 CUnit - A unit testing framework for C - Version 2.1-3 00:13:04.038 http://cunit.sourceforge.net/ 00:13:04.038 00:13:04.038 00:13:04.038 Suite: nvme_compliance 00:13:04.038 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 12:14:32.564636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.038 [2024-05-15 12:14:32.565979] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:04.038 [2024-05-15 12:14:32.565994] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:04.038 [2024-05-15 12:14:32.566002] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:04.298 [2024-05-15 12:14:32.568666] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.298 passed 00:13:04.298 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 12:14:32.646207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.298 [2024-05-15 12:14:32.649224] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.298 passed 00:13:04.298 Test: admin_identify_ns ...[2024-05-15 12:14:32.728245] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.298 [2024-05-15 12:14:32.789204] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:04.298 [2024-05-15 12:14:32.797209] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:04.298 [2024-05-15 12:14:32.818295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.557 passed 00:13:04.557 Test: admin_get_features_mandatory_features ...[2024-05-15 12:14:32.893745] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.557 [2024-05-15 12:14:32.896764] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.557 passed 00:13:04.557 Test: admin_get_features_optional_features ...[2024-05-15 12:14:32.970262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.557 [2024-05-15 12:14:32.973288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.557 passed 00:13:04.557 Test: admin_set_features_number_of_queues ...[2024-05-15 12:14:33.048345] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.816 [2024-05-15 12:14:33.154290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.816 passed 00:13:04.816 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 12:14:33.228400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.816 [2024-05-15 12:14:33.231422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.816 passed 
00:13:04.816 Test: admin_get_log_page_with_lpo ...[2024-05-15 12:14:33.307843] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.076 [2024-05-15 12:14:33.376203] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:05.076 [2024-05-15 12:14:33.389251] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.076 passed 00:13:05.076 Test: fabric_property_get ...[2024-05-15 12:14:33.461685] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.076 [2024-05-15 12:14:33.462909] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:05.076 [2024-05-15 12:14:33.464708] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.076 passed 00:13:05.076 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 12:14:33.542209] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.076 [2024-05-15 12:14:33.543437] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:05.076 [2024-05-15 12:14:33.545225] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.076 passed 00:13:05.335 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 12:14:33.620359] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.335 [2024-05-15 12:14:33.705201] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:05.335 [2024-05-15 12:14:33.721200] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:05.335 [2024-05-15 12:14:33.726282] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.335 passed 00:13:05.335 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 12:14:33.802628] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.336 [2024-05-15 12:14:33.803859] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:05.336 [2024-05-15 12:14:33.805651] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.336 passed 00:13:05.595 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 12:14:33.881271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.595 [2024-05-15 12:14:33.959200] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:05.595 [2024-05-15 12:14:33.983198] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:05.595 [2024-05-15 12:14:33.988286] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.595 passed 00:13:05.595 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 12:14:34.063750] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.595 [2024-05-15 12:14:34.064960] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:05.595 [2024-05-15 12:14:34.064989] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:05.595 [2024-05-15 12:14:34.066770] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.595 passed 00:13:05.854 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
12:14:34.140312] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.854 [2024-05-15 12:14:34.233198] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:05.854 [2024-05-15 12:14:34.241202] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:05.854 [2024-05-15 12:14:34.249203] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:05.854 [2024-05-15 12:14:34.257202] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:05.854 [2024-05-15 12:14:34.286296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.854 passed 00:13:05.854 Test: admin_create_io_sq_verify_pc ...[2024-05-15 12:14:34.360917] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.854 [2024-05-15 12:14:34.377205] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:06.112 [2024-05-15 12:14:34.395093] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.112 passed 00:13:06.112 Test: admin_create_io_qp_max_qps ...[2024-05-15 12:14:34.469583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.049 [2024-05-15 12:14:35.565205] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:07.618 [2024-05-15 12:14:35.936093] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.618 passed 00:13:07.618 Test: admin_create_io_sq_shared_cq ...[2024-05-15 12:14:36.013669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.618 [2024-05-15 12:14:36.146204] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:07.877 [2024-05-15 12:14:36.183270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.877 passed 00:13:07.877 00:13:07.877 Run Summary: Type Total Ran Passed Failed Inactive 00:13:07.877 suites 1 1 n/a 0 0 00:13:07.877 tests 18 18 18 0 0 00:13:07.877 asserts 360 360 360 0 n/a 00:13:07.877 00:13:07.877 Elapsed time = 1.485 seconds 00:13:07.877 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2058653 00:13:07.877 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # '[' -z 2058653 ']' 00:13:07.877 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # kill -0 2058653 00:13:07.877 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # uname 00:13:07.877 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:07.877 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2058653 00:13:07.877 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:07.877 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:07.877 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2058653' 00:13:07.877 killing process with pid 2058653 00:13:07.877 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@966 -- # kill 2058653 00:13:07.877 [2024-05-15 12:14:36.281870] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:07.877 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # wait 2058653 00:13:08.138 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:08.138 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:08.138 00:13:08.138 real 0m6.165s 00:13:08.138 user 0m17.332s 00:13:08.138 sys 0m0.696s 00:13:08.138 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:08.138 12:14:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:08.138 ************************************ 00:13:08.138 END TEST nvmf_vfio_user_nvme_compliance 00:13:08.138 ************************************ 00:13:08.138 12:14:36 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:08.138 12:14:36 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:08.138 12:14:36 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:08.138 12:14:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.138 ************************************ 00:13:08.138 START TEST nvmf_vfio_user_fuzz 00:13:08.138 ************************************ 00:13:08.138 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:08.400 * Looking for test storage... 
00:13:08.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:08.400 12:14:36 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2059776 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2059776' 00:13:08.400 Process pid: 2059776 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2059776 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@828 -- # '[' -z 2059776 ']' 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:08.400 12:14:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:09.381 12:14:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:09.381 12:14:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@861 -- # return 0 00:13:09.381 12:14:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:10.320 malloc0 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:10.320 12:14:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:42.413 Fuzzing completed. Shutting down the fuzz application 00:13:42.413 00:13:42.413 Dumping successful admin opcodes: 00:13:42.413 8, 9, 10, 24, 00:13:42.413 Dumping successful io opcodes: 00:13:42.413 0, 00:13:42.413 NS: 0x200003a1ef00 I/O qp, Total commands completed: 911184, total successful commands: 3556, random_seed: 2833880640 00:13:42.413 NS: 0x200003a1ef00 admin qp, Total commands completed: 222463, total successful commands: 1788, random_seed: 2900978368 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2059776 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # '[' -z 2059776 ']' 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # kill -0 2059776 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # uname 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2059776 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2059776' 00:13:42.413 killing process with pid 2059776 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # kill 2059776 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # wait 2059776 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
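The fuzz stage that just completed is configured entirely through the RPCs traced above: a VFIOUSER transport is created, a 64 MB malloc bdev (malloc0) is added as a namespace of nqn.2021-09.io.spdk:cnode0, and a vfio-user listener is placed under /var/run/vfio-user before nvme_fuzz is pointed at it for 30 seconds. A consolidated sketch of that sequence, assuming a running nvmf_tgt, SPDK's scripts/rpc.py, and repo-relative paths in place of the Jenkins workspace paths (the test itself drives the same calls through its rpc_cmd wrapper):

    # Sketch of the target setup shown in the trace above -- not the test script itself
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # 30-second fuzz run with the same flags as the trace (-m 0x2 -t 30 -S 123456 -N -a)
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -N -a \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'

The pass criterion here is simply that the target survives the run; the summary above (roughly 911k I/O and 222k admin commands completed) shows it did, before the subsystem is deleted and the target process is killed.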
00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:42.413 00:13:42.413 real 0m32.860s 00:13:42.413 user 0m31.210s 00:13:42.413 sys 0m29.009s 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:42.413 12:15:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:42.413 ************************************ 00:13:42.413 END TEST nvmf_vfio_user_fuzz 00:13:42.413 ************************************ 00:13:42.413 12:15:09 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:42.413 12:15:09 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:42.413 12:15:09 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:42.413 12:15:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:42.413 ************************************ 00:13:42.413 START TEST nvmf_host_management 00:13:42.413 ************************************ 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:42.413 * Looking for test storage... 00:13:42.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.413 12:15:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:42.414 12:15:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.694 12:15:16 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:47.694 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:47.694 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
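The scan running here is nvmf/common.sh's gather_supported_nvmf_pci_devs: with SPDK_TEST_NVMF_NICS=e810 this job keys on the Intel E810 device IDs the script registers (0x1592 and 0x159b) and has just reported the two functions of that adapter at 0000:af:00.0 and 0000:af:00.1; the lines that follow map them to their net devices (cvl_0_0 and cvl_0_1). For checking the same hardware outside the harness, a hypothetical spot-check (not part of common.sh; the PCI addresses are simply the ones reported in this run, and the expected driver is ice) could be:

    # List E810 functions by the vendor:device pair reported above (0x8086:0x159b)
    lspci -d 8086:159b
    # Confirm the bound driver and the net device name for one function
    readlink /sys/bus/pci/devices/0000:af:00.0/driver
    ls /sys/bus/pci/devices/0000:af:00.0/net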
00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:47.694 Found net devices under 0000:af:00.0: cvl_0_0 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:47.694 Found net devices under 0000:af:00.1: cvl_0_1 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.694 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:13:47.952 00:13:47.952 --- 10.0.0.2 ping statistics --- 00:13:47.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.952 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:47.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:13:47.952 00:13:47.952 --- 10.0.0.1 ping statistics --- 00:13:47.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.952 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.952 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:47.953 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:48.211 12:15:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:48.211 12:15:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:48.211 12:15:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:48.211 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.211 12:15:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:48.211 12:15:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:48.211 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2069056 00:13:48.211 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2069056 00:13:48.211 12:15:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:48.211 12:15:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 2069056 ']' 00:13:48.211 12:15:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.211 12:15:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:48.212 12:15:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.212 12:15:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:48.212 12:15:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:48.212 [2024-05-15 12:15:16.558129] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:13:48.212 [2024-05-15 12:15:16.558177] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.212 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.212 [2024-05-15 12:15:16.632097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.212 [2024-05-15 12:15:16.705416] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.212 [2024-05-15 12:15:16.705454] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.212 [2024-05-15 12:15:16.705463] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.212 [2024-05-15 12:15:16.705471] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.212 [2024-05-15 12:15:16.705494] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.212 [2024-05-15 12:15:16.705594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.212 [2024-05-15 12:15:16.705688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.212 [2024-05-15 12:15:16.705724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.212 [2024-05-15 12:15:16.705725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.150 [2024-05-15 12:15:17.410997] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.150 12:15:17 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.150 Malloc0 00:13:49.150 [2024-05-15 12:15:17.477574] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:49.150 [2024-05-15 12:15:17.477836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2069346 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2069346 /var/tmp/bdevperf.sock 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 2069346 ']' 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:49.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:49.150 { 00:13:49.150 "params": { 00:13:49.150 "name": "Nvme$subsystem", 00:13:49.150 "trtype": "$TEST_TRANSPORT", 00:13:49.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:49.150 "adrfam": "ipv4", 00:13:49.150 "trsvcid": "$NVMF_PORT", 00:13:49.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:49.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:49.150 "hdgst": ${hdgst:-false}, 00:13:49.150 "ddgst": ${ddgst:-false} 00:13:49.150 }, 00:13:49.150 "method": "bdev_nvme_attach_controller" 00:13:49.150 } 00:13:49.150 EOF 00:13:49.150 )") 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
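On the initiator side this test is plain bdevperf: the heredoc above builds the bdev_nvme_attach_controller parameters (the rendered JSON is printed just below), bdevperf consumes them through --json /dev/fd/63 at start-up, and the resulting Nvme0n1 bdev is what the 10-second verify workload and the later bdev_get_iostat polling operate on. A commented restatement of the invocation, with repo-relative paths and a plain file standing in for the /dev/fd/63 process substitution (flag meanings are the usual bdevperf ones, noted here as an aid rather than taken from the log):

    # nvme0_attach.json: the generated config shown in the surrounding trace
    # -q 64: queue depth; -o 65536: I/O size in bytes; -w verify: write-and-read-back
    # workload; -t 10: run time in seconds; -r: JSON-RPC socket polled by waitforio
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json ./nvme0_attach.json \
        -q 64 -o 65536 -w verify -t 10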
00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:49.150 12:15:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:49.150 "params": { 00:13:49.150 "name": "Nvme0", 00:13:49.150 "trtype": "tcp", 00:13:49.150 "traddr": "10.0.0.2", 00:13:49.150 "adrfam": "ipv4", 00:13:49.150 "trsvcid": "4420", 00:13:49.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:49.150 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:49.150 "hdgst": false, 00:13:49.150 "ddgst": false 00:13:49.150 }, 00:13:49.151 "method": "bdev_nvme_attach_controller" 00:13:49.151 }' 00:13:49.151 [2024-05-15 12:15:17.584734] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:13:49.151 [2024-05-15 12:15:17.584781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069346 ] 00:13:49.151 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.151 [2024-05-15 12:15:17.655574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.409 [2024-05-15 12:15:17.729292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.669 Running I/O for 10 seconds... 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.928 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:50.189 12:15:18 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:13:50.189 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:13:50.189 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:50.189 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:50.189 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:50.189 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:50.189 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:50.189 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.189 [2024-05-15 12:15:18.473170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd72f0 is same with the state(5) to be set 00:13:50.189 [2024-05-15 12:15:18.473243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd72f0 is same with the state(5) to be set 00:13:50.189 [2024-05-15 12:15:18.473768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.189 [2024-05-15 12:15:18.473801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.189 [2024-05-15 12:15:18.473819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.189 [2024-05-15 12:15:18.473830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.189 [2024-05-15 12:15:18.473842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.189 [2024-05-15 12:15:18.473852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.189 [2024-05-15 12:15:18.473863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.189 [2024-05-15 12:15:18.473872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.189 [2024-05-15 12:15:18.473883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.189 [2024-05-15 12:15:18.473896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.189 [2024-05-15 12:15:18.473907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.189 [2024-05-15 12:15:18.473916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.189 [2024-05-15 12:15:18.473927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.189 [2024-05-15 12:15:18.473937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.189 [2024-05-15 12:15:18.473947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.189 [2024-05-15 12:15:18.473957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE command / ABORTED - SQ DELETION (00/08) completion pair repeats for cid:18 through cid:63, lba stepping by 128 from 67840 up to 73600 ...]
00:13:50.191 [2024-05-15 12:15:18.474920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.191 [2024-05-15 12:15:18.474929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION (00/08) completion pair repeats for cid:1 through cid:9, lba stepping by 128 from 65664 up to 66688 ...]
00:13:50.191 [2024-05-15 12:15:18.475174] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28a0ad0 was disconnected and freed. reset controller.
00:13:50.191 [2024-05-15 12:15:18.476040] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:50.191 task offset: 66816 on job bdev=Nvme0n1 fails 00:13:50.191 00:13:50.191 Latency(us) 00:13:50.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.191 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:50.191 Job: Nvme0n1 ended in about 0.43 seconds with error 00:13:50.191 Verification LBA range: start 0x0 length 0x400 00:13:50.191 Nvme0n1 : 0.43 1185.13 74.07 148.14 0.00 46949.10 1926.76 54525.95 00:13:50.191 =================================================================================================================== 00:13:50.191 Total : 1185.13 74.07 148.14 0.00 46949.10 1926.76 54525.95 00:13:50.191 [2024-05-15 12:15:18.477586] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:50.191 [2024-05-15 12:15:18.477604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248f9f0 (9): Bad file descriptor 00:13:50.191 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:50.191 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:50.191 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:50.191 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.191 [2024-05-15 12:15:18.480682] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:13:50.191 [2024-05-15 12:15:18.480977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:50.191 [2024-05-15 12:15:18.481003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.191 [2024-05-15 12:15:18.481021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:13:50.191 [2024-05-15 12:15:18.481031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:13:50.191 [2024-05-15 12:15:18.481041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:13:50.191 [2024-05-15 12:15:18.481051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x248f9f0 00:13:50.191 [2024-05-15 12:15:18.481072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248f9f0 (9): Bad file descriptor 00:13:50.191 [2024-05-15 12:15:18.481087] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:13:50.191 [2024-05-15 12:15:18.481097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:13:50.191 [2024-05-15 12:15:18.481107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:13:50.191 [2024-05-15 12:15:18.481122] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
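The connect failures above are the expected outcome of the host_management test toggling the subsystem's allow list: with 'nqn.2016-06.io.spdk:host0' no longer allowed on cnode0, the fabric CONNECT is rejected (sct 1, sc 132), the controller reset gives up, and the test then re-adds the host at host_management.sh line 85. A minimal sketch of that allow/deny round trip with the same rpc.py used throughout this job; the removal itself happened before this excerpt, so the remove_host call below is an assumption about how the host was dropped:

  # drop host0 from cnode0's allow list (assumed earlier step, not shown in this excerpt)
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # new fabric CONNECTs from host0 now fail with 'does not allow host' (sct 1, sc 132)
  # re-allow it, as host_management.sh line 85 does above
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0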
00:13:50.191 12:15:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:50.191 12:15:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2069346 00:13:51.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2069346) - No such process 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:51.128 { 00:13:51.128 "params": { 00:13:51.128 "name": "Nvme$subsystem", 00:13:51.128 "trtype": "$TEST_TRANSPORT", 00:13:51.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:51.128 "adrfam": "ipv4", 00:13:51.128 "trsvcid": "$NVMF_PORT", 00:13:51.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:51.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:51.128 "hdgst": ${hdgst:-false}, 00:13:51.128 "ddgst": ${ddgst:-false} 00:13:51.128 }, 00:13:51.128 "method": "bdev_nvme_attach_controller" 00:13:51.128 } 00:13:51.128 EOF 00:13:51.128 )") 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:51.128 12:15:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:51.128 "params": { 00:13:51.128 "name": "Nvme0", 00:13:51.128 "trtype": "tcp", 00:13:51.128 "traddr": "10.0.0.2", 00:13:51.128 "adrfam": "ipv4", 00:13:51.128 "trsvcid": "4420", 00:13:51.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:51.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:51.128 "hdgst": false, 00:13:51.129 "ddgst": false 00:13:51.129 }, 00:13:51.129 "method": "bdev_nvme_attach_controller" 00:13:51.129 }' 00:13:51.129 [2024-05-15 12:15:19.548745] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:13:51.129 [2024-05-15 12:15:19.548797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069633 ] 00:13:51.129 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.129 [2024-05-15 12:15:19.619990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.388 [2024-05-15 12:15:19.686940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.650 Running I/O for 1 seconds... 
00:13:52.637 00:13:52.637 Latency(us) 00:13:52.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.637 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:52.637 Verification LBA range: start 0x0 length 0x400 00:13:52.637 Nvme0n1 : 1.05 1040.42 65.03 0.00 0.00 60828.25 10905.19 53267.66 00:13:52.637 =================================================================================================================== 00:13:52.637 Total : 1040.42 65.03 0.00 0.00 60828.25 10905.19 53267.66 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:52.897 rmmod nvme_tcp 00:13:52.897 rmmod nvme_fabrics 00:13:52.897 rmmod nvme_keyring 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2069056 ']' 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2069056 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' -z 2069056 ']' 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # kill -0 2069056 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # uname 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2069056 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2069056' 00:13:52.897 killing process with pid 2069056 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # kill 2069056 00:13:52.897 [2024-05-15 12:15:21.344150] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:52.897 12:15:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@971 -- # wait 2069056 00:13:53.157 [2024-05-15 12:15:21.542554] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:53.157 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:53.157 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:53.157 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:53.157 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:53.157 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:53.157 12:15:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.157 12:15:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.157 12:15:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.695 12:15:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:55.695 12:15:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:55.695 00:13:55.695 real 0m14.123s 00:13:55.695 user 0m23.649s 00:13:55.695 sys 0m6.548s 00:13:55.695 12:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:55.695 12:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:55.695 ************************************ 00:13:55.695 END TEST nvmf_host_management 00:13:55.695 ************************************ 00:13:55.695 12:15:23 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:55.695 12:15:23 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:55.695 12:15:23 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:55.695 12:15:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:55.695 ************************************ 00:13:55.695 START TEST nvmf_lvol 00:13:55.695 ************************************ 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:55.695 * Looking for test storage... 
00:13:55.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.695 12:15:23 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:55.695 12:15:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.269 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:02.270 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:02.270 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:02.270 Found net devices under 0000:af:00.0: cvl_0_0 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:02.270 Found net devices under 0000:af:00.1: cvl_0_1 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.270 
12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:14:02.270 00:14:02.270 --- 10.0.0.2 ping statistics --- 00:14:02.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.270 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:14:02.270 00:14:02.270 --- 10.0.0.1 ping statistics --- 00:14:02.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.270 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2073709 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2073709 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # '[' -z 2073709 ']' 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:02.270 12:15:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:02.530 [2024-05-15 12:15:30.816298] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:14:02.530 [2024-05-15 12:15:30.816350] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.530 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.530 [2024-05-15 12:15:30.890997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:02.530 [2024-05-15 12:15:30.959877] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.530 [2024-05-15 12:15:30.959918] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
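The 10.0.0.x plumbing a little earlier in the trace is plain Linux network-namespace setup: the e810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened in iptables, and both directions are ping-tested before nvmf_tgt is launched inside the namespace with ip netns exec. Condensed from the commands shown above (interface and namespace names are the ones this job detected, not fixed values):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator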
00:14:02.530 [2024-05-15 12:15:30.959928] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.530 [2024-05-15 12:15:30.959936] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.530 [2024-05-15 12:15:30.959959] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.530 [2024-05-15 12:15:30.960012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.530 [2024-05-15 12:15:30.960105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.530 [2024-05-15 12:15:30.960107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.099 12:15:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:03.099 12:15:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@861 -- # return 0 00:14:03.099 12:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.099 12:15:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:03.099 12:15:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:03.359 12:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.359 12:15:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:03.359 [2024-05-15 12:15:31.800900] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.359 12:15:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:03.618 12:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:03.618 12:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:03.878 12:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:03.878 12:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:03.878 12:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:04.138 12:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f185d77b-b55b-4747-9f3d-a82f017dcfbb 00:14:04.138 12:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f185d77b-b55b-4747-9f3d-a82f017dcfbb lvol 20 00:14:04.398 12:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a0251623-9d0f-4089-99af-3af57a9bea62 00:14:04.398 12:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:04.658 12:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a0251623-9d0f-4089-99af-3af57a9bea62 00:14:04.658 12:15:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:04.916 [2024-05-15 12:15:33.242170] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:04.916 [2024-05-15 12:15:33.242445] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.916 12:15:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:04.916 12:15:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:04.916 12:15:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2074169 00:14:04.916 12:15:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:05.175 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.114 12:15:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a0251623-9d0f-4089-99af-3af57a9bea62 MY_SNAPSHOT 00:14:06.373 12:15:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2825eb00-bf72-4b5a-a8ed-22a36798b618 00:14:06.373 12:15:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a0251623-9d0f-4089-99af-3af57a9bea62 30 00:14:06.373 12:15:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2825eb00-bf72-4b5a-a8ed-22a36798b618 MY_CLONE 00:14:06.633 12:15:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1daa486e-ec3a-4fe2-badc-e6949b94126c 00:14:06.633 12:15:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1daa486e-ec3a-4fe2-badc-e6949b94126c 00:14:07.201 12:15:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2074169 00:14:15.362 Initializing NVMe Controllers 00:14:15.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:15.362 Controller IO queue size 128, less than required. 00:14:15.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:15.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:15.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:15.362 Initialization complete. Launching workers. 
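For reference, the provisioning and online-grow sequence the nvmf_lvol test drives above, reduced to its rpc.py calls (the same script at the workspace path shown in the trace). The UUIDs are whatever bdev_lvol_create_lvstore, bdev_lvol_create, bdev_lvol_snapshot and bdev_lvol_clone return on a given run, so they appear as placeholders here; 20 and 30 come from LVOL_BDEV_INIT_SIZE and LVOL_BDEV_FINAL_SIZE set near the top of nvmf_lvol.sh:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                 # Malloc0
  scripts/rpc.py bdev_malloc_create 64 512                 # Malloc1
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs        # returns <lvstore-uuid>
  scripts/rpc.py bdev_lvol_create -u <lvstore-uuid> lvol 20
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # with spdk_nvme_perf writing to the subsystem, grow the volume online:
  scripts/rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
  scripts/rpc.py bdev_lvol_resize <lvol-uuid> 30
  scripts/rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
  scripts/rpc.py bdev_lvol_inflate <clone-uuid>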
00:14:15.362 ======================================================== 00:14:15.362 Latency(us) 00:14:15.362 Device Information : IOPS MiB/s Average min max 00:14:15.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12654.80 49.43 10118.60 1578.31 81813.00 00:14:15.362 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12568.70 49.10 10185.15 3722.87 44975.25 00:14:15.362 ======================================================== 00:14:15.362 Total : 25223.49 98.53 10151.76 1578.31 81813.00 00:14:15.362 00:14:15.362 12:15:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:15.647 12:15:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a0251623-9d0f-4089-99af-3af57a9bea62 00:14:15.905 12:15:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f185d77b-b55b-4747-9f3d-a82f017dcfbb 00:14:15.905 12:15:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:15.905 12:15:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:15.905 12:15:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:15.905 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.905 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:15.905 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.905 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:15.905 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.905 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.905 rmmod nvme_tcp 00:14:15.905 rmmod nvme_fabrics 00:14:16.162 rmmod nvme_keyring 00:14:16.162 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:16.162 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2073709 ']' 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2073709 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' -z 2073709 ']' 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # kill -0 2073709 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # uname 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2073709 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2073709' 00:14:16.163 killing process with pid 2073709 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # kill 2073709 00:14:16.163 [2024-05-15 12:15:44.531322] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:14:16.163 12:15:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@971 -- # wait 2073709 00:14:16.420 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:16.420 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:16.420 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:16.420 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.420 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:16.420 12:15:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.420 12:15:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.420 12:15:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.324 12:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:18.584 00:14:18.584 real 0m23.116s 00:14:18.584 user 1m2.056s 00:14:18.584 sys 0m10.031s 00:14:18.584 12:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:18.584 12:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:18.584 ************************************ 00:14:18.584 END TEST nvmf_lvol 00:14:18.584 ************************************ 00:14:18.584 12:15:46 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:18.584 12:15:46 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:18.584 12:15:46 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:18.584 12:15:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:18.584 ************************************ 00:14:18.584 START TEST nvmf_lvs_grow 00:14:18.584 ************************************ 00:14:18.584 12:15:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:18.584 * Looking for test storage... 
00:14:18.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:18.584 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:18.585 12:15:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:25.157 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:25.157 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:25.157 Found net devices under 0000:af:00.0: cvl_0_0 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:25.157 Found net devices under 0000:af:00.1: cvl_0_1 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.157 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:25.158 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:25.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:14:25.417 00:14:25.417 --- 10.0.0.2 ping statistics --- 00:14:25.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.417 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:25.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:14:25.417 00:14:25.417 --- 10.0.0.1 ping statistics --- 00:14:25.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.417 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2079727 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2079727 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # '[' -z 2079727 ']' 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:25.417 12:15:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:25.417 [2024-05-15 12:15:53.889746] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:14:25.417 [2024-05-15 12:15:53.889801] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.417 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.676 [2024-05-15 12:15:53.964755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.676 [2024-05-15 12:15:54.038176] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.676 [2024-05-15 12:15:54.038217] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
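The nvmf_tcp_init block above is the part worth keeping in mind for the rest of the trace: the target's physical port is moved into a private network namespace, the initiator's port stays in the root namespace, and the two ends ping each other before any NVMe/TCP traffic flows. A rough sketch of the same bring-up, using the cvl_0_0/cvl_0_1 interface names discovered earlier:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP port 4420 in
    ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace

The target itself is then started as ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 (with $SPDK_DIR again standing in for the workspace path), so its TCP listener lives on 10.0.0.2 inside the namespace while its /var/tmp/spdk.sock RPC socket stays reachable from the test script.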
00:14:25.676 [2024-05-15 12:15:54.038227] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.676 [2024-05-15 12:15:54.038235] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.676 [2024-05-15 12:15:54.038258] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.676 [2024-05-15 12:15:54.038278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.245 12:15:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:26.245 12:15:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # return 0 00:14:26.245 12:15:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.245 12:15:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:26.245 12:15:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:26.245 12:15:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.245 12:15:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:26.505 [2024-05-15 12:15:54.880651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:26.505 ************************************ 00:14:26.505 START TEST lvs_grow_clean 00:14:26.505 ************************************ 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # lvs_grow 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:26.505 12:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:26.765 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:26.765 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:27.025 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=079954dd-02e8-45e3-98d7-73ae3b24df32 00:14:27.025 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 079954dd-02e8-45e3-98d7-73ae3b24df32 00:14:27.025 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:27.025 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:27.025 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:27.025 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 079954dd-02e8-45e3-98d7-73ae3b24df32 lvol 150 00:14:27.285 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ec20e6e4-6a87-45a6-9898-bcacad98ae13 00:14:27.285 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:27.285 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:27.285 [2024-05-15 12:15:55.798874] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:27.285 [2024-05-15 12:15:55.798918] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:27.285 true 00:14:27.545 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 079954dd-02e8-45e3-98d7-73ae3b24df32 00:14:27.545 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:27.545 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:27.545 12:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:27.804 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ec20e6e4-6a87-45a6-9898-bcacad98ae13 00:14:27.804 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:28.065 [2024-05-15 12:15:56.456635] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:28.065 [2024-05-15 
12:15:56.456876] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.065 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:28.325 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2080298 00:14:28.325 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:28.325 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:28.325 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2080298 /var/tmp/bdevperf.sock 00:14:28.325 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # '[' -z 2080298 ']' 00:14:28.325 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:28.325 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:28.325 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:28.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:28.325 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:28.325 12:15:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:28.325 [2024-05-15 12:15:56.668255] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
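Pieced together, the lvs_grow_clean setup running above is: build a logical volume store on a file-backed AIO bdev, carve one lvol out of it, export that lvol over NVMe/TCP, then enlarge the backing file and grow the lvstore into the new space while I/O is running. A condensed sketch of the RPC flow, not the script verbatim, with $SPDK_DIR standing in for the workspace path and the run-specific UUIDs captured into shell variables the way the script itself does:

    RPC=$SPDK_DIR/scripts/rpc.py
    AIO=$SPDK_DIR/test/nvmf/target/aio_bdev

    truncate -s 200M $AIO                                    # 200 MiB file backs the AIO bdev
    $RPC bdev_aio_create $AIO aio_bdev 4096                  # register it as bdev "aio_bdev", 4 KiB blocks
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)     # 4 MiB clusters, 49 data clusters at this size
    lvol=$($RPC bdev_lvol_create -u $lvs lvol 150)           # 150 MiB logical volume

    truncate -s 400M $AIO                                    # enlarge the backing file...
    $RPC bdev_aio_rescan aio_bdev                            # ...and have the AIO bdev pick up the new size

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # later, while bdevperf drives random writes, the lvstore is grown into the new space:
    $RPC bdev_lvol_grow_lvstore -u $lvs                      # total_data_clusters goes from 49 to 99

The checks on total_data_clusters before and after the grow (49, then 99) are exactly what the jq filters in the trace are asserting.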
00:14:28.325 [2024-05-15 12:15:56.668301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080298 ] 00:14:28.325 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.325 [2024-05-15 12:15:56.737214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.325 [2024-05-15 12:15:56.806003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.264 12:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:29.264 12:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # return 0 00:14:29.264 12:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:29.264 Nvme0n1 00:14:29.264 12:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:29.524 [ 00:14:29.524 { 00:14:29.524 "name": "Nvme0n1", 00:14:29.524 "aliases": [ 00:14:29.524 "ec20e6e4-6a87-45a6-9898-bcacad98ae13" 00:14:29.524 ], 00:14:29.524 "product_name": "NVMe disk", 00:14:29.524 "block_size": 4096, 00:14:29.524 "num_blocks": 38912, 00:14:29.524 "uuid": "ec20e6e4-6a87-45a6-9898-bcacad98ae13", 00:14:29.524 "assigned_rate_limits": { 00:14:29.524 "rw_ios_per_sec": 0, 00:14:29.524 "rw_mbytes_per_sec": 0, 00:14:29.524 "r_mbytes_per_sec": 0, 00:14:29.524 "w_mbytes_per_sec": 0 00:14:29.524 }, 00:14:29.524 "claimed": false, 00:14:29.524 "zoned": false, 00:14:29.524 "supported_io_types": { 00:14:29.524 "read": true, 00:14:29.524 "write": true, 00:14:29.524 "unmap": true, 00:14:29.524 "write_zeroes": true, 00:14:29.524 "flush": true, 00:14:29.524 "reset": true, 00:14:29.524 "compare": true, 00:14:29.524 "compare_and_write": true, 00:14:29.524 "abort": true, 00:14:29.524 "nvme_admin": true, 00:14:29.524 "nvme_io": true 00:14:29.524 }, 00:14:29.524 "memory_domains": [ 00:14:29.524 { 00:14:29.524 "dma_device_id": "system", 00:14:29.524 "dma_device_type": 1 00:14:29.524 } 00:14:29.524 ], 00:14:29.524 "driver_specific": { 00:14:29.524 "nvme": [ 00:14:29.524 { 00:14:29.524 "trid": { 00:14:29.524 "trtype": "TCP", 00:14:29.524 "adrfam": "IPv4", 00:14:29.524 "traddr": "10.0.0.2", 00:14:29.524 "trsvcid": "4420", 00:14:29.524 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:29.524 }, 00:14:29.524 "ctrlr_data": { 00:14:29.524 "cntlid": 1, 00:14:29.524 "vendor_id": "0x8086", 00:14:29.524 "model_number": "SPDK bdev Controller", 00:14:29.524 "serial_number": "SPDK0", 00:14:29.524 "firmware_revision": "24.05", 00:14:29.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:29.524 "oacs": { 00:14:29.524 "security": 0, 00:14:29.524 "format": 0, 00:14:29.524 "firmware": 0, 00:14:29.524 "ns_manage": 0 00:14:29.524 }, 00:14:29.524 "multi_ctrlr": true, 00:14:29.524 "ana_reporting": false 00:14:29.524 }, 00:14:29.524 "vs": { 00:14:29.524 "nvme_version": "1.3" 00:14:29.524 }, 00:14:29.524 "ns_data": { 00:14:29.524 "id": 1, 00:14:29.524 "can_share": true 00:14:29.524 } 00:14:29.524 } 00:14:29.524 ], 00:14:29.524 "mp_policy": "active_passive" 00:14:29.524 } 00:14:29.524 } 00:14:29.524 ] 00:14:29.524 12:15:57 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:29.524 12:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2080571 00:14:29.524 12:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:29.524 Running I/O for 10 seconds... 00:14:30.464 Latency(us) 00:14:30.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.464 Nvme0n1 : 1.00 23380.00 91.33 0.00 0.00 0.00 0.00 0.00 00:14:30.464 =================================================================================================================== 00:14:30.464 Total : 23380.00 91.33 0.00 0.00 0.00 0.00 0.00 00:14:30.464 00:14:31.403 12:15:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 079954dd-02e8-45e3-98d7-73ae3b24df32 00:14:31.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.662 Nvme0n1 : 2.00 23777.00 92.88 0.00 0.00 0.00 0.00 0.00 00:14:31.662 =================================================================================================================== 00:14:31.662 Total : 23777.00 92.88 0.00 0.00 0.00 0.00 0.00 00:14:31.662 00:14:31.662 true 00:14:31.662 12:16:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:31.662 12:16:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 079954dd-02e8-45e3-98d7-73ae3b24df32 00:14:31.920 12:16:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:31.920 12:16:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:31.920 12:16:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2080571 00:14:32.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.488 Nvme0n1 : 3.00 23688.33 92.53 0.00 0.00 0.00 0.00 0.00 00:14:32.488 =================================================================================================================== 00:14:32.488 Total : 23688.33 92.53 0.00 0.00 0.00 0.00 0.00 00:14:32.488 00:14:33.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.905 Nvme0n1 : 4.00 23830.25 93.09 0.00 0.00 0.00 0.00 0.00 00:14:33.905 =================================================================================================================== 00:14:33.905 Total : 23830.25 93.09 0.00 0.00 0.00 0.00 0.00 00:14:33.905 00:14:34.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.473 Nvme0n1 : 5.00 23928.20 93.47 0.00 0.00 0.00 0.00 0.00 00:14:34.473 =================================================================================================================== 00:14:34.473 Total : 23928.20 93.47 0.00 0.00 0.00 0.00 0.00 00:14:34.473 00:14:35.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.852 Nvme0n1 : 6.00 23993.50 93.72 0.00 0.00 0.00 0.00 0.00 00:14:35.852 
=================================================================================================================== 00:14:35.852 Total : 23993.50 93.72 0.00 0.00 0.00 0.00 0.00 00:14:35.852 00:14:36.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.791 Nvme0n1 : 7.00 24049.14 93.94 0.00 0.00 0.00 0.00 0.00 00:14:36.791 =================================================================================================================== 00:14:36.791 Total : 24049.14 93.94 0.00 0.00 0.00 0.00 0.00 00:14:36.791 00:14:37.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.729 Nvme0n1 : 8.00 24091.12 94.11 0.00 0.00 0.00 0.00 0.00 00:14:37.729 =================================================================================================================== 00:14:37.729 Total : 24091.12 94.11 0.00 0.00 0.00 0.00 0.00 00:14:37.729 00:14:38.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.666 Nvme0n1 : 9.00 24095.11 94.12 0.00 0.00 0.00 0.00 0.00 00:14:38.666 =================================================================================================================== 00:14:38.666 Total : 24095.11 94.12 0.00 0.00 0.00 0.00 0.00 00:14:38.666 00:14:39.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.602 Nvme0n1 : 10.00 24124.10 94.23 0.00 0.00 0.00 0.00 0.00 00:14:39.603 =================================================================================================================== 00:14:39.603 Total : 24124.10 94.23 0.00 0.00 0.00 0.00 0.00 00:14:39.603 00:14:39.603 00:14:39.603 Latency(us) 00:14:39.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.603 Nvme0n1 : 10.01 24124.10 94.23 0.00 0.00 5302.28 3434.09 26633.83 00:14:39.603 =================================================================================================================== 00:14:39.603 Total : 24124.10 94.23 0.00 0.00 5302.28 3434.09 26633.83 00:14:39.603 0 00:14:39.603 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2080298 00:14:39.603 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' -z 2080298 ']' 00:14:39.603 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # kill -0 2080298 00:14:39.603 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # uname 00:14:39.603 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:39.603 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2080298 00:14:39.603 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:14:39.603 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:14:39.603 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2080298' 00:14:39.603 killing process with pid 2080298 00:14:39.603 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # kill 2080298 00:14:39.603 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.603 00:14:39.603 Latency(us) 00:14:39.603 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:39.603 =================================================================================================================== 00:14:39.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.603 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # wait 2080298 00:14:39.862 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:40.121 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:40.121 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 079954dd-02e8-45e3-98d7-73ae3b24df32 00:14:40.121 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:40.380 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:40.380 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:40.380 12:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:40.639 [2024-05-15 12:16:08.969387] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:40.639 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 079954dd-02e8-45e3-98d7-73ae3b24df32 00:14:40.639 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:14:40.639 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 079954dd-02e8-45e3-98d7-73ae3b24df32 00:14:40.639 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:40.640 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:40.640 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:40.640 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:40.640 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:40.640 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:40.640 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:40.640 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:40.640 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 079954dd-02e8-45e3-98d7-73ae3b24df32 00:14:40.640 request: 00:14:40.640 { 00:14:40.640 "uuid": "079954dd-02e8-45e3-98d7-73ae3b24df32", 00:14:40.640 "method": "bdev_lvol_get_lvstores", 00:14:40.640 "req_id": 1 00:14:40.640 } 00:14:40.640 Got JSON-RPC error response 00:14:40.640 response: 00:14:40.640 { 00:14:40.640 "code": -19, 00:14:40.640 "message": "No such device" 00:14:40.640 } 00:14:40.899 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:14:40.899 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:40.899 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:40.899 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:40.899 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:40.899 aio_bdev 00:14:40.899 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ec20e6e4-6a87-45a6-9898-bcacad98ae13 00:14:40.899 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_name=ec20e6e4-6a87-45a6-9898-bcacad98ae13 00:14:40.899 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:14:40.899 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local i 00:14:40.899 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:14:40.899 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:14:40.899 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:41.158 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ec20e6e4-6a87-45a6-9898-bcacad98ae13 -t 2000 00:14:41.158 [ 00:14:41.158 { 00:14:41.158 "name": "ec20e6e4-6a87-45a6-9898-bcacad98ae13", 00:14:41.158 "aliases": [ 00:14:41.158 "lvs/lvol" 00:14:41.158 ], 00:14:41.158 "product_name": "Logical Volume", 00:14:41.158 "block_size": 4096, 00:14:41.158 "num_blocks": 38912, 00:14:41.158 "uuid": "ec20e6e4-6a87-45a6-9898-bcacad98ae13", 00:14:41.158 "assigned_rate_limits": { 00:14:41.158 "rw_ios_per_sec": 0, 00:14:41.158 "rw_mbytes_per_sec": 0, 00:14:41.158 "r_mbytes_per_sec": 0, 00:14:41.158 "w_mbytes_per_sec": 0 00:14:41.158 }, 00:14:41.158 "claimed": false, 00:14:41.158 "zoned": false, 00:14:41.158 "supported_io_types": { 00:14:41.158 "read": true, 00:14:41.158 "write": true, 00:14:41.158 "unmap": true, 00:14:41.158 "write_zeroes": true, 00:14:41.158 "flush": false, 00:14:41.158 "reset": true, 00:14:41.158 "compare": false, 00:14:41.158 "compare_and_write": false, 00:14:41.158 "abort": false, 00:14:41.158 "nvme_admin": false, 00:14:41.158 "nvme_io": false 00:14:41.158 }, 00:14:41.158 "driver_specific": { 00:14:41.158 "lvol": { 00:14:41.158 "lvol_store_uuid": "079954dd-02e8-45e3-98d7-73ae3b24df32", 00:14:41.158 "base_bdev": "aio_bdev", 
00:14:41.158 "thin_provision": false, 00:14:41.158 "num_allocated_clusters": 38, 00:14:41.158 "snapshot": false, 00:14:41.158 "clone": false, 00:14:41.158 "esnap_clone": false 00:14:41.158 } 00:14:41.158 } 00:14:41.158 } 00:14:41.158 ] 00:14:41.418 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # return 0 00:14:41.418 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 079954dd-02e8-45e3-98d7-73ae3b24df32 00:14:41.418 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:41.418 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:41.418 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 079954dd-02e8-45e3-98d7-73ae3b24df32 00:14:41.418 12:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:41.677 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:41.677 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ec20e6e4-6a87-45a6-9898-bcacad98ae13 00:14:41.937 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 079954dd-02e8-45e3-98d7-73ae3b24df32 00:14:41.937 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:42.197 00:14:42.197 real 0m15.644s 00:14:42.197 user 0m14.753s 00:14:42.197 sys 0m2.001s 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:42.197 ************************************ 00:14:42.197 END TEST lvs_grow_clean 00:14:42.197 ************************************ 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:42.197 ************************************ 00:14:42.197 START TEST lvs_grow_dirty 00:14:42.197 ************************************ 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # lvs_grow dirty 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:42.197 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:42.457 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:42.457 12:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:42.716 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4b4de9be-7642-431c-aa80-9659703775d2 00:14:42.716 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b4de9be-7642-431c-aa80-9659703775d2 00:14:42.716 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:42.716 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:42.716 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:42.716 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4b4de9be-7642-431c-aa80-9659703775d2 lvol 150 00:14:42.976 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b8a4798a-c249-4f33-b2cb-27a59ca4135d 00:14:42.976 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:42.976 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:43.236 [2024-05-15 12:16:11.540376] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:43.236 [2024-05-15 12:16:11.540419] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:43.236 true 00:14:43.236 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b4de9be-7642-431c-aa80-9659703775d2 00:14:43.236 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:14:43.236 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:43.236 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:43.495 12:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b8a4798a-c249-4f33-b2cb-27a59ca4135d 00:14:43.755 12:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:43.755 [2024-05-15 12:16:12.194339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.755 12:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:44.015 12:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:44.015 12:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2083027 00:14:44.015 12:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:44.015 12:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2083027 /var/tmp/bdevperf.sock 00:14:44.015 12:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 2083027 ']' 00:14:44.015 12:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.015 12:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:44.015 12:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.015 12:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:44.015 12:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:44.015 [2024-05-15 12:16:12.416065] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
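Both lvs_grow variants wire up the initiator the same way, and the dirty case is doing it right here: bdevperf has just been launched idle (-z) on its own RPC socket, and the next steps attach an NVMe-oF controller over TCP, inspect the resulting Nvme0n1 bdev, and trigger the 10-second random-write job via perform_tests. Roughly, with $SPDK_DIR again shorthand for the workspace checkout and the backgrounding an assumption of this sketch:

    # start bdevperf idle on its own RPC socket; the run is triggered later via perform_tests
    $SPDK_DIR/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # attach the target's lvol namespace over NVMe/TCP as controller Nvme0
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # sanity-check the attached bdev, then run the workload
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Everything else in the sketch is taken from the commands visible in the trace.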
00:14:44.015 [2024-05-15 12:16:12.416115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083027 ] 00:14:44.015 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.015 [2024-05-15 12:16:12.485179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.275 [2024-05-15 12:16:12.560298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.843 12:16:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:44.843 12:16:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:14:44.843 12:16:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:45.101 Nvme0n1 00:14:45.101 12:16:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:45.101 [ 00:14:45.101 { 00:14:45.101 "name": "Nvme0n1", 00:14:45.101 "aliases": [ 00:14:45.101 "b8a4798a-c249-4f33-b2cb-27a59ca4135d" 00:14:45.101 ], 00:14:45.101 "product_name": "NVMe disk", 00:14:45.101 "block_size": 4096, 00:14:45.101 "num_blocks": 38912, 00:14:45.101 "uuid": "b8a4798a-c249-4f33-b2cb-27a59ca4135d", 00:14:45.101 "assigned_rate_limits": { 00:14:45.101 "rw_ios_per_sec": 0, 00:14:45.101 "rw_mbytes_per_sec": 0, 00:14:45.101 "r_mbytes_per_sec": 0, 00:14:45.101 "w_mbytes_per_sec": 0 00:14:45.101 }, 00:14:45.101 "claimed": false, 00:14:45.101 "zoned": false, 00:14:45.101 "supported_io_types": { 00:14:45.101 "read": true, 00:14:45.101 "write": true, 00:14:45.101 "unmap": true, 00:14:45.101 "write_zeroes": true, 00:14:45.101 "flush": true, 00:14:45.101 "reset": true, 00:14:45.101 "compare": true, 00:14:45.101 "compare_and_write": true, 00:14:45.101 "abort": true, 00:14:45.101 "nvme_admin": true, 00:14:45.101 "nvme_io": true 00:14:45.101 }, 00:14:45.101 "memory_domains": [ 00:14:45.101 { 00:14:45.101 "dma_device_id": "system", 00:14:45.101 "dma_device_type": 1 00:14:45.101 } 00:14:45.101 ], 00:14:45.101 "driver_specific": { 00:14:45.101 "nvme": [ 00:14:45.102 { 00:14:45.102 "trid": { 00:14:45.102 "trtype": "TCP", 00:14:45.102 "adrfam": "IPv4", 00:14:45.102 "traddr": "10.0.0.2", 00:14:45.102 "trsvcid": "4420", 00:14:45.102 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:45.102 }, 00:14:45.102 "ctrlr_data": { 00:14:45.102 "cntlid": 1, 00:14:45.102 "vendor_id": "0x8086", 00:14:45.102 "model_number": "SPDK bdev Controller", 00:14:45.102 "serial_number": "SPDK0", 00:14:45.102 "firmware_revision": "24.05", 00:14:45.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:45.102 "oacs": { 00:14:45.102 "security": 0, 00:14:45.102 "format": 0, 00:14:45.102 "firmware": 0, 00:14:45.102 "ns_manage": 0 00:14:45.102 }, 00:14:45.102 "multi_ctrlr": true, 00:14:45.102 "ana_reporting": false 00:14:45.102 }, 00:14:45.102 "vs": { 00:14:45.102 "nvme_version": "1.3" 00:14:45.102 }, 00:14:45.102 "ns_data": { 00:14:45.102 "id": 1, 00:14:45.102 "can_share": true 00:14:45.102 } 00:14:45.102 } 00:14:45.102 ], 00:14:45.102 "mp_policy": "active_passive" 00:14:45.102 } 00:14:45.102 } 00:14:45.102 ] 00:14:45.102 12:16:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2083296 00:14:45.102 12:16:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:45.102 12:16:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:45.360 Running I/O for 10 seconds... 00:14:46.297 Latency(us) 00:14:46.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.297 Nvme0n1 : 1.00 23585.00 92.13 0.00 0.00 0.00 0.00 0.00 00:14:46.297 =================================================================================================================== 00:14:46.297 Total : 23585.00 92.13 0.00 0.00 0.00 0.00 0.00 00:14:46.297 00:14:47.236 12:16:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4b4de9be-7642-431c-aa80-9659703775d2 00:14:47.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.236 Nvme0n1 : 2.00 23858.00 93.20 0.00 0.00 0.00 0.00 0.00 00:14:47.236 =================================================================================================================== 00:14:47.236 Total : 23858.00 93.20 0.00 0.00 0.00 0.00 0.00 00:14:47.236 00:14:47.513 true 00:14:47.513 12:16:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b4de9be-7642-431c-aa80-9659703775d2 00:14:47.513 12:16:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:47.513 12:16:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:47.513 12:16:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:47.513 12:16:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2083296 00:14:48.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.481 Nvme0n1 : 3.00 23963.67 93.61 0.00 0.00 0.00 0.00 0.00 00:14:48.481 =================================================================================================================== 00:14:48.481 Total : 23963.67 93.61 0.00 0.00 0.00 0.00 0.00 00:14:48.481 00:14:49.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.422 Nvme0n1 : 4.00 24048.75 93.94 0.00 0.00 0.00 0.00 0.00 00:14:49.422 =================================================================================================================== 00:14:49.422 Total : 24048.75 93.94 0.00 0.00 0.00 0.00 0.00 00:14:49.422 00:14:50.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.358 Nvme0n1 : 5.00 24106.20 94.16 0.00 0.00 0.00 0.00 0.00 00:14:50.358 =================================================================================================================== 00:14:50.358 Total : 24106.20 94.16 0.00 0.00 0.00 0.00 0.00 00:14:50.358 00:14:51.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.350 Nvme0n1 : 6.00 24154.83 94.35 0.00 0.00 0.00 0.00 0.00 00:14:51.350 
=================================================================================================================== 00:14:51.350 Total : 24154.83 94.35 0.00 0.00 0.00 0.00 0.00 00:14:51.350 00:14:52.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.286 Nvme0n1 : 7.00 24112.29 94.19 0.00 0.00 0.00 0.00 0.00 00:14:52.286 =================================================================================================================== 00:14:52.286 Total : 24112.29 94.19 0.00 0.00 0.00 0.00 0.00 00:14:52.286 00:14:53.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.222 Nvme0n1 : 8.00 24131.62 94.26 0.00 0.00 0.00 0.00 0.00 00:14:53.222 =================================================================================================================== 00:14:53.222 Total : 24131.62 94.26 0.00 0.00 0.00 0.00 0.00 00:14:53.222 00:14:54.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.600 Nvme0n1 : 9.00 24159.67 94.37 0.00 0.00 0.00 0.00 0.00 00:14:54.600 =================================================================================================================== 00:14:54.600 Total : 24159.67 94.37 0.00 0.00 0.00 0.00 0.00 00:14:54.600 00:14:55.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.536 Nvme0n1 : 10.00 24185.50 94.47 0.00 0.00 0.00 0.00 0.00 00:14:55.536 =================================================================================================================== 00:14:55.536 Total : 24185.50 94.47 0.00 0.00 0.00 0.00 0.00 00:14:55.536 00:14:55.536 00:14:55.536 Latency(us) 00:14:55.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.536 Nvme0n1 : 10.01 24188.11 94.48 0.00 0.00 5288.25 2057.83 20342.37 00:14:55.536 =================================================================================================================== 00:14:55.536 Total : 24188.11 94.48 0.00 0.00 5288.25 2057.83 20342.37 00:14:55.536 0 00:14:55.536 12:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2083027 00:14:55.536 12:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' -z 2083027 ']' 00:14:55.536 12:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # kill -0 2083027 00:14:55.536 12:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # uname 00:14:55.536 12:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:55.536 12:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2083027 00:14:55.536 12:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:14:55.536 12:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:14:55.536 12:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2083027' 00:14:55.536 killing process with pid 2083027 00:14:55.536 12:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # kill 2083027 00:14:55.536 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.536 00:14:55.536 Latency(us) 00:14:55.536 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:55.536 =================================================================================================================== 00:14:55.536 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.536 12:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # wait 2083027 00:14:55.536 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:55.795 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b4de9be-7642-431c-aa80-9659703775d2 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2079727 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2079727 00:14:56.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2079727 Killed "${NVMF_APP[@]}" "$@" 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2085165 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2085165 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 2085165 ']' 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:56.054 12:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:56.313 [2024-05-15 12:16:24.629686] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:14:56.313 [2024-05-15 12:16:24.629737] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.313 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.313 [2024-05-15 12:16:24.702533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.313 [2024-05-15 12:16:24.769932] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.313 [2024-05-15 12:16:24.769972] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.313 [2024-05-15 12:16:24.769981] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.313 [2024-05-15 12:16:24.769990] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.313 [2024-05-15 12:16:24.769996] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.313 [2024-05-15 12:16:24.770024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.881 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:56.881 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:14:56.881 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.881 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:56.881 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:57.140 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.140 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:57.140 [2024-05-15 12:16:25.606702] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:57.140 [2024-05-15 12:16:25.606784] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:57.140 [2024-05-15 12:16:25.606813] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:57.140 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:57.140 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b8a4798a-c249-4f33-b2cb-27a59ca4135d 00:14:57.140 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=b8a4798a-c249-4f33-b2cb-27a59ca4135d 00:14:57.140 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:14:57.140 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:14:57.140 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@899 -- # [[ -z '' ]] 00:14:57.140 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:14:57.140 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:57.399 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b8a4798a-c249-4f33-b2cb-27a59ca4135d -t 2000 00:14:57.658 [ 00:14:57.658 { 00:14:57.658 "name": "b8a4798a-c249-4f33-b2cb-27a59ca4135d", 00:14:57.658 "aliases": [ 00:14:57.658 "lvs/lvol" 00:14:57.658 ], 00:14:57.658 "product_name": "Logical Volume", 00:14:57.658 "block_size": 4096, 00:14:57.658 "num_blocks": 38912, 00:14:57.658 "uuid": "b8a4798a-c249-4f33-b2cb-27a59ca4135d", 00:14:57.658 "assigned_rate_limits": { 00:14:57.658 "rw_ios_per_sec": 0, 00:14:57.658 "rw_mbytes_per_sec": 0, 00:14:57.658 "r_mbytes_per_sec": 0, 00:14:57.658 "w_mbytes_per_sec": 0 00:14:57.658 }, 00:14:57.658 "claimed": false, 00:14:57.658 "zoned": false, 00:14:57.658 "supported_io_types": { 00:14:57.658 "read": true, 00:14:57.658 "write": true, 00:14:57.658 "unmap": true, 00:14:57.658 "write_zeroes": true, 00:14:57.658 "flush": false, 00:14:57.658 "reset": true, 00:14:57.658 "compare": false, 00:14:57.658 "compare_and_write": false, 00:14:57.658 "abort": false, 00:14:57.658 "nvme_admin": false, 00:14:57.658 "nvme_io": false 00:14:57.658 }, 00:14:57.658 "driver_specific": { 00:14:57.658 "lvol": { 00:14:57.658 "lvol_store_uuid": "4b4de9be-7642-431c-aa80-9659703775d2", 00:14:57.658 "base_bdev": "aio_bdev", 00:14:57.658 "thin_provision": false, 00:14:57.658 "num_allocated_clusters": 38, 00:14:57.658 "snapshot": false, 00:14:57.658 "clone": false, 00:14:57.658 "esnap_clone": false 00:14:57.658 } 00:14:57.658 } 00:14:57.658 } 00:14:57.658 ] 00:14:57.658 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:14:57.658 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b4de9be-7642-431c-aa80-9659703775d2 00:14:57.658 12:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:57.658 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:57.658 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b4de9be-7642-431c-aa80-9659703775d2 00:14:57.658 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:57.917 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:57.917 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:58.176 [2024-05-15 12:16:26.470886] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4b4de9be-7642-431c-aa80-9659703775d2 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b4de9be-7642-431c-aa80-9659703775d2 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b4de9be-7642-431c-aa80-9659703775d2 00:14:58.176 request: 00:14:58.176 { 00:14:58.176 "uuid": "4b4de9be-7642-431c-aa80-9659703775d2", 00:14:58.176 "method": "bdev_lvol_get_lvstores", 00:14:58.176 "req_id": 1 00:14:58.176 } 00:14:58.176 Got JSON-RPC error response 00:14:58.176 response: 00:14:58.176 { 00:14:58.176 "code": -19, 00:14:58.176 "message": "No such device" 00:14:58.176 } 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:58.176 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:58.435 aio_bdev 00:14:58.435 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b8a4798a-c249-4f33-b2cb-27a59ca4135d 00:14:58.435 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=b8a4798a-c249-4f33-b2cb-27a59ca4135d 00:14:58.435 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:14:58.435 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:14:58.435 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 
00:14:58.435 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:14:58.435 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:58.695 12:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b8a4798a-c249-4f33-b2cb-27a59ca4135d -t 2000 00:14:58.695 [ 00:14:58.695 { 00:14:58.695 "name": "b8a4798a-c249-4f33-b2cb-27a59ca4135d", 00:14:58.695 "aliases": [ 00:14:58.695 "lvs/lvol" 00:14:58.695 ], 00:14:58.695 "product_name": "Logical Volume", 00:14:58.695 "block_size": 4096, 00:14:58.695 "num_blocks": 38912, 00:14:58.695 "uuid": "b8a4798a-c249-4f33-b2cb-27a59ca4135d", 00:14:58.695 "assigned_rate_limits": { 00:14:58.695 "rw_ios_per_sec": 0, 00:14:58.695 "rw_mbytes_per_sec": 0, 00:14:58.695 "r_mbytes_per_sec": 0, 00:14:58.695 "w_mbytes_per_sec": 0 00:14:58.695 }, 00:14:58.695 "claimed": false, 00:14:58.695 "zoned": false, 00:14:58.695 "supported_io_types": { 00:14:58.695 "read": true, 00:14:58.695 "write": true, 00:14:58.695 "unmap": true, 00:14:58.695 "write_zeroes": true, 00:14:58.695 "flush": false, 00:14:58.695 "reset": true, 00:14:58.695 "compare": false, 00:14:58.695 "compare_and_write": false, 00:14:58.695 "abort": false, 00:14:58.695 "nvme_admin": false, 00:14:58.695 "nvme_io": false 00:14:58.695 }, 00:14:58.695 "driver_specific": { 00:14:58.695 "lvol": { 00:14:58.695 "lvol_store_uuid": "4b4de9be-7642-431c-aa80-9659703775d2", 00:14:58.695 "base_bdev": "aio_bdev", 00:14:58.695 "thin_provision": false, 00:14:58.695 "num_allocated_clusters": 38, 00:14:58.695 "snapshot": false, 00:14:58.695 "clone": false, 00:14:58.695 "esnap_clone": false 00:14:58.695 } 00:14:58.695 } 00:14:58.695 } 00:14:58.695 ] 00:14:58.695 12:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:14:58.695 12:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:58.695 12:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b4de9be-7642-431c-aa80-9659703775d2 00:14:58.954 12:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:58.954 12:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b4de9be-7642-431c-aa80-9659703775d2 00:14:58.955 12:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:59.214 12:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:59.214 12:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b8a4798a-c249-4f33-b2cb-27a59ca4135d 00:14:59.214 12:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4b4de9be-7642-431c-aa80-9659703775d2 00:14:59.472 12:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:59.731 00:14:59.731 real 0m17.391s 00:14:59.731 user 0m43.613s 00:14:59.731 sys 0m4.833s 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:59.731 ************************************ 00:14:59.731 END TEST lvs_grow_dirty 00:14:59.731 ************************************ 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # type=--id 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # id=0 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # for n in $shm_files 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:59.731 nvmf_trace.0 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # return 0 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.731 rmmod nvme_tcp 00:14:59.731 rmmod nvme_fabrics 00:14:59.731 rmmod nvme_keyring 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2085165 ']' 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2085165 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' -z 2085165 ']' 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # kill -0 2085165 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # uname 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:59.731 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2085165 00:14:59.989 12:16:28 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:59.989 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:59.989 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2085165' 00:14:59.989 killing process with pid 2085165 00:14:59.989 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # kill 2085165 00:14:59.989 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # wait 2085165 00:14:59.989 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:59.990 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:59.990 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:59.990 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.990 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:59.990 12:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.990 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.990 12:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.527 12:16:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:02.527 00:15:02.527 real 0m43.637s 00:15:02.527 user 1m4.465s 00:15:02.527 sys 0m12.534s 00:15:02.527 12:16:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:02.527 12:16:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:02.527 ************************************ 00:15:02.527 END TEST nvmf_lvs_grow 00:15:02.527 ************************************ 00:15:02.527 12:16:30 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:02.527 12:16:30 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:02.527 12:16:30 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:02.527 12:16:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:02.527 ************************************ 00:15:02.527 START TEST nvmf_bdev_io_wait 00:15:02.527 ************************************ 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:02.527 * Looking for test storage... 
00:15:02.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.527 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:02.528 12:16:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:09.129 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:09.130 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:09.130 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:09.130 Found net devices under 0000:af:00.0: cvl_0_0 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:09.130 Found net devices under 0000:af:00.1: cvl_0_1 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:09.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:09.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:15:09.130 00:15:09.130 --- 10.0.0.2 ping statistics --- 00:15:09.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.130 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:09.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:15:09.130 00:15:09.130 --- 10.0.0.1 ping statistics --- 00:15:09.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.130 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2089469 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2089469 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # '[' -z 2089469 ']' 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:09.130 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.131 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:09.131 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:09.131 12:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:09.131 [2024-05-15 12:16:37.524726] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:15:09.131 [2024-05-15 12:16:37.524784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.131 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.131 [2024-05-15 12:16:37.600547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.390 [2024-05-15 12:16:37.681144] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.390 [2024-05-15 12:16:37.681178] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.390 [2024-05-15 12:16:37.681187] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.390 [2024-05-15 12:16:37.681217] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.390 [2024-05-15 12:16:37.681224] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.390 [2024-05-15 12:16:37.681266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.390 [2024-05-15 12:16:37.681284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.390 [2024-05-15 12:16:37.681390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.390 [2024-05-15 12:16:37.681392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # return 0 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:09.959 [2024-05-15 12:16:38.442798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:09.959 12:16:38 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:09.959 Malloc0 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:09.959 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:10.219 [2024-05-15 12:16:38.507069] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:10.219 [2024-05-15 12:16:38.507311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2089739 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2089741 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:10.219 { 00:15:10.219 "params": { 00:15:10.219 "name": "Nvme$subsystem", 00:15:10.219 "trtype": "$TEST_TRANSPORT", 00:15:10.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.219 "adrfam": "ipv4", 00:15:10.219 "trsvcid": "$NVMF_PORT", 00:15:10.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.219 "hdgst": ${hdgst:-false}, 00:15:10.219 "ddgst": ${ddgst:-false} 00:15:10.219 }, 00:15:10.219 "method": 
"bdev_nvme_attach_controller" 00:15:10.219 } 00:15:10.219 EOF 00:15:10.219 )") 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2089743 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:10.219 { 00:15:10.219 "params": { 00:15:10.219 "name": "Nvme$subsystem", 00:15:10.219 "trtype": "$TEST_TRANSPORT", 00:15:10.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.219 "adrfam": "ipv4", 00:15:10.219 "trsvcid": "$NVMF_PORT", 00:15:10.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.219 "hdgst": ${hdgst:-false}, 00:15:10.219 "ddgst": ${ddgst:-false} 00:15:10.219 }, 00:15:10.219 "method": "bdev_nvme_attach_controller" 00:15:10.219 } 00:15:10.219 EOF 00:15:10.219 )") 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2089746 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:10.219 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:10.219 { 00:15:10.219 "params": { 00:15:10.219 "name": "Nvme$subsystem", 00:15:10.219 "trtype": "$TEST_TRANSPORT", 00:15:10.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.219 "adrfam": "ipv4", 00:15:10.219 "trsvcid": "$NVMF_PORT", 00:15:10.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.219 "hdgst": ${hdgst:-false}, 00:15:10.219 "ddgst": ${ddgst:-false} 00:15:10.219 }, 00:15:10.219 "method": "bdev_nvme_attach_controller" 00:15:10.219 } 00:15:10.219 EOF 00:15:10.219 )") 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:10.220 { 00:15:10.220 "params": { 00:15:10.220 "name": "Nvme$subsystem", 00:15:10.220 "trtype": "$TEST_TRANSPORT", 00:15:10.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.220 "adrfam": "ipv4", 00:15:10.220 "trsvcid": "$NVMF_PORT", 00:15:10.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.220 "hdgst": ${hdgst:-false}, 00:15:10.220 "ddgst": ${ddgst:-false} 00:15:10.220 }, 00:15:10.220 "method": "bdev_nvme_attach_controller" 00:15:10.220 } 00:15:10.220 EOF 00:15:10.220 )") 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2089739 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:10.220 "params": { 00:15:10.220 "name": "Nvme1", 00:15:10.220 "trtype": "tcp", 00:15:10.220 "traddr": "10.0.0.2", 00:15:10.220 "adrfam": "ipv4", 00:15:10.220 "trsvcid": "4420", 00:15:10.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.220 "hdgst": false, 00:15:10.220 "ddgst": false 00:15:10.220 }, 00:15:10.220 "method": "bdev_nvme_attach_controller" 00:15:10.220 }' 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
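The heredoc fragments above come from gen_nvmf_target_json: each call emits one bdev_nvme_attach_controller entry with the connection parameters substituted in, and jq merges the result into the JSON config that bdevperf reads through --json /dev/fd/63. A minimal stand-in generator is sketched below; the params block matches the substituted values printed in the trace, while the outer "subsystems"/"bdev"/"config" wrapper and the usage line are assumptions about the merged layout, not something shown verbatim here.

# Illustrative stand-in for gen_nvmf_target_json as used here; the outer wrapper
# is an assumed layout, the params block matches the printf output in the trace.
gen_cfg() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Usage mirroring the -w write instance above; process substitution stands in
# for the /dev/fd/63 seen in the trace (SPDK_DIR as in the earlier sketch).
"$SPDK_DIR/build/examples/bdevperf" -m 0x10 -i 1 --json <(gen_cfg | jq .) -q 128 -o 4096 -w write -t 1 -s 256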
00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:10.220 "params": { 00:15:10.220 "name": "Nvme1", 00:15:10.220 "trtype": "tcp", 00:15:10.220 "traddr": "10.0.0.2", 00:15:10.220 "adrfam": "ipv4", 00:15:10.220 "trsvcid": "4420", 00:15:10.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.220 "hdgst": false, 00:15:10.220 "ddgst": false 00:15:10.220 }, 00:15:10.220 "method": "bdev_nvme_attach_controller" 00:15:10.220 }' 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:10.220 "params": { 00:15:10.220 "name": "Nvme1", 00:15:10.220 "trtype": "tcp", 00:15:10.220 "traddr": "10.0.0.2", 00:15:10.220 "adrfam": "ipv4", 00:15:10.220 "trsvcid": "4420", 00:15:10.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.220 "hdgst": false, 00:15:10.220 "ddgst": false 00:15:10.220 }, 00:15:10.220 "method": "bdev_nvme_attach_controller" 00:15:10.220 }' 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:10.220 12:16:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:10.220 "params": { 00:15:10.220 "name": "Nvme1", 00:15:10.220 "trtype": "tcp", 00:15:10.220 "traddr": "10.0.0.2", 00:15:10.220 "adrfam": "ipv4", 00:15:10.220 "trsvcid": "4420", 00:15:10.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.220 "hdgst": false, 00:15:10.220 "ddgst": false 00:15:10.220 }, 00:15:10.220 "method": "bdev_nvme_attach_controller" 00:15:10.220 }' 00:15:10.220 [2024-05-15 12:16:38.558978] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:15:10.220 [2024-05-15 12:16:38.558980] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:15:10.220 [2024-05-15 12:16:38.559025] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 12:16:38.559026] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:10.220 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:10.220 [2024-05-15 12:16:38.559901] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:15:10.220 [2024-05-15 12:16:38.559954] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:10.220 [2024-05-15 12:16:38.560741] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
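The four bdevperf instances above run concurrently, one per workload (write, read, flush, unmap), each with its own core mask and instance id, which is why their EAL startup lines interleave in the capture and carry distinct --file-prefix values (spdk1 through spdk4). The launch-and-reap pattern is sketched below, reusing the illustrative gen_cfg generator from the earlier sketch; BDEVPERF mirrors the traced path, the variable names are assumptions.

# Launch the four workloads in parallel, then reap them with wait.
BDEVPERF="$SPDK_DIR/build/examples/bdevperf"
common_args=(-q 128 -o 4096 -t 1 -s 256)

"$BDEVPERF" -m 0x10 -i 1 --json <(gen_cfg) -w write "${common_args[@]}" & write_pid=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_cfg) -w read  "${common_args[@]}" & read_pid=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_cfg) -w flush "${common_args[@]}" & flush_pid=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_cfg) -w unmap "${common_args[@]}" & unmap_pid=$!

# Distinct -i values give each instance its own DPDK --file-prefix (spdk1..spdk4),
# so their hugepage/shared-memory domains do not collide.
wait "$write_pid" "$read_pid" "$flush_pid" "$unmap_pid"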
00:15:10.220 [2024-05-15 12:16:38.560788] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:10.220 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.220 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.220 [2024-05-15 12:16:38.716335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.220 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.479 [2024-05-15 12:16:38.774769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.479 [2024-05-15 12:16:38.791045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:10.479 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.479 [2024-05-15 12:16:38.844998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:10.479 [2024-05-15 12:16:38.867863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.479 [2024-05-15 12:16:38.942530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:10.479 [2024-05-15 12:16:38.954423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.739 [2024-05-15 12:16:39.048522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:10.739 Running I/O for 1 seconds... 00:15:10.739 Running I/O for 1 seconds... 00:15:10.739 Running I/O for 1 seconds... 00:15:10.739 Running I/O for 1 seconds... 00:15:11.675 00:15:11.675 Latency(us) 00:15:11.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.675 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:11.675 Nvme1n1 : 1.01 14163.53 55.33 0.00 0.00 9007.56 3263.69 16882.07 00:15:11.675 =================================================================================================================== 00:15:11.675 Total : 14163.53 55.33 0.00 0.00 9007.56 3263.69 16882.07 00:15:11.675 00:15:11.675 Latency(us) 00:15:11.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.675 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:11.675 Nvme1n1 : 1.01 6383.14 24.93 0.00 0.00 19920.88 7444.89 24641.54 00:15:11.675 =================================================================================================================== 00:15:11.675 Total : 6383.14 24.93 0.00 0.00 19920.88 7444.89 24641.54 00:15:11.934 00:15:11.934 Latency(us) 00:15:11.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.934 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:11.934 Nvme1n1 : 1.00 256896.03 1003.50 0.00 0.00 496.63 204.80 1343.49 00:15:11.934 =================================================================================================================== 00:15:11.934 Total : 256896.03 1003.50 0.00 0.00 496.63 204.80 1343.49 00:15:11.934 00:15:11.934 Latency(us) 00:15:11.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.934 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:11.934 Nvme1n1 : 1.01 6445.72 25.18 0.00 0.00 19803.56 5164.24 43830.48 00:15:11.934 =================================================================================================================== 00:15:11.934 Total : 6445.72 25.18 0.00 0.00 19803.56 5164.24 43830.48 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 2089741 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2089743 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2089746 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.193 rmmod nvme_tcp 00:15:12.193 rmmod nvme_fabrics 00:15:12.193 rmmod nvme_keyring 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2089469 ']' 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2089469 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' -z 2089469 ']' 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # kill -0 2089469 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # uname 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2089469 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2089469' 00:15:12.193 killing process with pid 2089469 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # kill 2089469 00:15:12.193 [2024-05-15 12:16:40.654087] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:12.193 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # wait 2089469 00:15:12.452 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:12.452 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:12.452 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:12.452 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.452 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.452 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.452 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.452 12:16:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.981 12:16:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:14.981 00:15:14.981 real 0m12.254s 00:15:14.981 user 0m19.789s 00:15:14.981 sys 0m6.935s 00:15:14.981 12:16:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:14.981 12:16:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:14.981 ************************************ 00:15:14.981 END TEST nvmf_bdev_io_wait 00:15:14.981 ************************************ 00:15:14.981 12:16:42 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:14.981 12:16:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:14.981 12:16:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:14.981 12:16:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:14.981 ************************************ 00:15:14.981 START TEST nvmf_queue_depth 00:15:14.981 ************************************ 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:14.981 * Looking for test storage... 
00:15:14.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.981 12:16:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:14.982 12:16:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.552 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:21.553 
12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:21.553 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:21.553 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:21.553 Found net devices under 0000:af:00.0: cvl_0_0 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:21.553 Found net devices under 0000:af:00.1: cvl_0_1 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:21.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:21.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:15:21.553 00:15:21.553 --- 10.0.0.2 ping statistics --- 00:15:21.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.553 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:21.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:21.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:15:21.553 00:15:21.553 --- 10.0.0.1 ping statistics --- 00:15:21.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.553 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2093729 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2093729 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 2093729 ']' 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:21.553 12:16:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.553 [2024-05-15 12:16:49.669938] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
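The nvmf_tcp_init sequence traced above moves one e810 port (cvl_0_0) into a private namespace to act as the target, while its peer port (cvl_0_1) stays in the default namespace as the initiator side. The commands below restate that plumbing as a plain script; every command appears in the trace, only the NS variable is added for readability.

# Target/initiator namespace plumbing, as traced.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # host -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                         # target namespace -> host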
00:15:21.553 [2024-05-15 12:16:49.669984] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.553 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.553 [2024-05-15 12:16:49.743048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.553 [2024-05-15 12:16:49.815433] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.553 [2024-05-15 12:16:49.815467] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.553 [2024-05-15 12:16:49.815476] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.553 [2024-05-15 12:16:49.815487] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.553 [2024-05-15 12:16:49.815510] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.553 [2024-05-15 12:16:49.815534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:22.122 [2024-05-15 12:16:50.513946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:22.122 Malloc0 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.122 12:16:50 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.122 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:22.123 [2024-05-15 12:16:50.573459] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:22.123 [2024-05-15 12:16:50.573681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.123 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.123 12:16:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2094005 00:15:22.123 12:16:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:22.123 12:16:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:22.123 12:16:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2094005 /var/tmp/bdevperf.sock 00:15:22.123 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 2094005 ']' 00:15:22.123 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:22.123 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:22.123 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:22.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:22.123 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:22.123 12:16:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:22.123 [2024-05-15 12:16:50.623242] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
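For the queue-depth run itself, bdevperf is started idle (-z) on its own RPC socket, the NVMe-oF controller is attached at runtime, and the workload is then triggered through bdevperf.py before the process is killed. A sketch of that flow; SPDK_DIR and the socket polling loop are illustrative, the RPC arguments mirror the trace.

# Queue-depth flow: idle bdevperf, runtime attach, then perform_tests.
SOCK=/var/tmp/bdevperf.sock

"$SPDK_DIR/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
until [ -S "$SOCK" ]; do sleep 0.1; done        # waitforlisten against the bdevperf socket

# Attach the remote namespace over NVMe/TCP; it shows up as bdev NVMe0n1.
"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Run the configured verify workload (10 s at queue depth 1024), then stop bdevperf;
# with -z it stays resident until killed, hence killprocess in the trace.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
kill "$bdevperf_pid"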
00:15:22.123 [2024-05-15 12:16:50.623287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094005 ] 00:15:22.382 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.382 [2024-05-15 12:16:50.693513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.382 [2024-05-15 12:16:50.763972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.951 12:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:22.951 12:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:15:22.951 12:16:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:22.951 12:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.951 12:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:23.211 NVMe0n1 00:15:23.211 12:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:23.211 12:16:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:23.211 Running I/O for 10 seconds... 00:15:35.426 00:15:35.426 Latency(us) 00:15:35.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.426 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:35.426 Verification LBA range: start 0x0 length 0x4000 00:15:35.426 NVMe0n1 : 10.06 12883.67 50.33 0.00 0.00 79193.48 17406.36 61236.84 00:15:35.426 =================================================================================================================== 00:15:35.426 Total : 12883.67 50.33 0.00 0.00 79193.48 17406.36 61236.84 00:15:35.426 0 00:15:35.426 12:17:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2094005 00:15:35.426 12:17:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 2094005 ']' 00:15:35.426 12:17:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 2094005 00:15:35.426 12:17:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:15:35.426 12:17:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:35.426 12:17:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2094005 00:15:35.426 12:17:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:35.427 12:17:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:35.427 12:17:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2094005' 00:15:35.427 killing process with pid 2094005 00:15:35.427 12:17:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 2094005 00:15:35.427 Received shutdown signal, test time was about 10.000000 seconds 00:15:35.427 00:15:35.427 Latency(us) 00:15:35.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.427 =================================================================================================================== 00:15:35.427 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:35.427 12:17:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 2094005 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:35.427 rmmod nvme_tcp 00:15:35.427 rmmod nvme_fabrics 00:15:35.427 rmmod nvme_keyring 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2093729 ']' 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2093729 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 2093729 ']' 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 2093729 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2093729 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2093729' 00:15:35.427 killing process with pid 2093729 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 2093729 00:15:35.427 [2024-05-15 12:17:02.174324] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 2093729 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.427 12:17:02 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.994 12:17:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:35.994 00:15:35.994 real 0m21.446s 00:15:35.994 user 0m24.840s 00:15:35.994 sys 0m6.865s 00:15:35.994 12:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:35.994 12:17:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:35.994 ************************************ 00:15:35.994 END TEST nvmf_queue_depth 00:15:35.994 ************************************ 00:15:35.994 12:17:04 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:35.994 12:17:04 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:35.994 12:17:04 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:35.994 12:17:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:36.253 ************************************ 00:15:36.253 START TEST nvmf_target_multipath 00:15:36.253 ************************************ 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:36.253 * Looking for test storage... 00:15:36.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:36.253 12:17:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:42.825 12:17:10 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:42.825 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:42.825 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.825 12:17:10 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:42.825 Found net devices under 0000:af:00.0: cvl_0_0 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:42.825 Found net devices under 0000:af:00.1: cvl_0_1 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.825 12:17:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:42.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:15:42.825 00:15:42.825 --- 10.0.0.2 ping statistics --- 00:15:42.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.825 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:42.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:15:42.825 00:15:42.825 --- 10.0.0.1 ping statistics --- 00:15:42.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.825 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:42.825 only one NIC for nvmf test 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:42.825 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:42.826 rmmod nvme_tcp 00:15:42.826 rmmod nvme_fabrics 00:15:42.826 rmmod nvme_keyring 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.826 12:17:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:45.365 00:15:45.365 real 0m8.836s 00:15:45.365 user 0m1.770s 00:15:45.365 sys 0m5.081s 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:45.365 12:17:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:45.365 ************************************ 00:15:45.365 END TEST nvmf_target_multipath 00:15:45.365 ************************************ 00:15:45.365 12:17:13 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:45.365 12:17:13 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:45.365 12:17:13 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:45.365 12:17:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:45.365 ************************************ 00:15:45.365 START TEST nvmf_zcopy 00:15:45.365 ************************************ 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:45.365 * Looking for test storage... 
00:15:45.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.365 12:17:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:51.941 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.941 
12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:51.941 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.941 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:51.941 Found net devices under 0000:af:00.0: cvl_0_0 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:51.942 Found net devices under 0000:af:00.1: cvl_0_1 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:51.942 12:17:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:51.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:15:51.942 00:15:51.942 --- 10.0.0.2 ping statistics --- 00:15:51.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.942 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:51.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:51.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:15:51.942 00:15:51.942 --- 10.0.0.1 ping statistics --- 00:15:51.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.942 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2103264 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2103264 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # '[' -z 2103264 ']' 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:51.942 12:17:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:51.942 [2024-05-15 12:17:20.333722] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:15:51.942 [2024-05-15 12:17:20.333772] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.942 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.942 [2024-05-15 12:17:20.408307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.202 [2024-05-15 12:17:20.482243] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.202 [2024-05-15 12:17:20.482276] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:52.202 [2024-05-15 12:17:20.482286] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.202 [2024-05-15 12:17:20.482295] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.202 [2024-05-15 12:17:20.482305] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.202 [2024-05-15 12:17:20.482324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@861 -- # return 0 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:52.771 [2024-05-15 12:17:21.178349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:52.771 [2024-05-15 12:17:21.198349] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:52.771 [2024-05-15 12:17:21.198535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:52.771 malloc0 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:52.771 12:17:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:52.771 { 00:15:52.771 "params": { 00:15:52.771 "name": "Nvme$subsystem", 00:15:52.771 "trtype": "$TEST_TRANSPORT", 00:15:52.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:52.771 "adrfam": "ipv4", 00:15:52.771 "trsvcid": "$NVMF_PORT", 00:15:52.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:52.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:52.772 "hdgst": ${hdgst:-false}, 00:15:52.772 "ddgst": ${ddgst:-false} 00:15:52.772 }, 00:15:52.772 "method": "bdev_nvme_attach_controller" 00:15:52.772 } 00:15:52.772 EOF 00:15:52.772 )") 00:15:52.772 12:17:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:52.772 12:17:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:52.772 12:17:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:52.772 12:17:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:52.772 "params": { 00:15:52.772 "name": "Nvme1", 00:15:52.772 "trtype": "tcp", 00:15:52.772 "traddr": "10.0.0.2", 00:15:52.772 "adrfam": "ipv4", 00:15:52.772 "trsvcid": "4420", 00:15:52.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.772 "hdgst": false, 00:15:52.772 "ddgst": false 00:15:52.772 }, 00:15:52.772 "method": "bdev_nvme_attach_controller" 00:15:52.772 }' 00:15:52.772 [2024-05-15 12:17:21.279828] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:15:52.772 [2024-05-15 12:17:21.279873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2103323 ] 00:15:53.031 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.031 [2024-05-15 12:17:21.349706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.031 [2024-05-15 12:17:21.418919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.290 Running I/O for 10 seconds... 
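For reference, the target configuration and verify run traced above reduce to a short RPC sequence plus one bdevperf invocation. The following is a condensed sketch, not the test script itself: it reuses the addresses and arguments visible in this run (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, malloc0, the rpc.py and bdevperf paths logged above), it assumes an nvmf_tgt is already up and listening on the default /var/tmp/spdk.sock, and the JSON wrapper around the bdev_nvme_attach_controller entry follows the generic SPDK "subsystems" config layout rather than reproducing the gen_nvmf_target_json helper verbatim; /tmp/bdevperf_nvme.json is an arbitrary scratch path.

    #!/usr/bin/env bash
    # Condensed sketch of the zcopy target setup and verify run traced above.
    # Assumes an nvmf_tgt is already running and reachable on /var/tmp/spdk.sock.
    set -e

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"

    # TCP transport with zero-copy enabled and 0-byte in-capsule data,
    # matching the options logged by target/zcopy.sh above.
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem capped at 10 namespaces, one TCP listener, one 32 MiB malloc
    # bdev attached as namespace 1 - the same RPCs shown in the trace.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # bdevperf config equivalent to the attach-controller entry printed by
    # gen_nvmf_target_json in the log, wrapped in a standard SPDK JSON config.
    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # 10-second verify workload, queue depth 128, 8 KiB I/O: the run whose
    # latency summary follows in the log.
    $spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192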
00:16:03.321 00:16:03.321 Latency(us) 00:16:03.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.321 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:03.321 Verification LBA range: start 0x0 length 0x1000 00:16:03.321 Nvme1n1 : 10.02 8674.47 67.77 0.00 0.00 14714.28 2097.15 44040.19 00:16:03.321 =================================================================================================================== 00:16:03.321 Total : 8674.47 67.77 0.00 0.00 14714.28 2097.15 44040.19 00:16:03.580 12:17:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2105159 00:16:03.580 12:17:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:03.581 12:17:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:03.581 12:17:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:03.581 12:17:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:03.581 12:17:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:03.581 12:17:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:03.581 12:17:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:03.581 12:17:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:03.581 { 00:16:03.581 "params": { 00:16:03.581 "name": "Nvme$subsystem", 00:16:03.581 "trtype": "$TEST_TRANSPORT", 00:16:03.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:03.581 "adrfam": "ipv4", 00:16:03.581 "trsvcid": "$NVMF_PORT", 00:16:03.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:03.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:03.581 "hdgst": ${hdgst:-false}, 00:16:03.581 "ddgst": ${ddgst:-false} 00:16:03.581 }, 00:16:03.581 "method": "bdev_nvme_attach_controller" 00:16:03.581 } 00:16:03.581 EOF 00:16:03.581 )") 00:16:03.581 12:17:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:03.581 [2024-05-15 12:17:31.871132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:31.871166] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 12:17:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:16:03.581 12:17:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:03.581 12:17:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:03.581 "params": { 00:16:03.581 "name": "Nvme1", 00:16:03.581 "trtype": "tcp", 00:16:03.581 "traddr": "10.0.0.2", 00:16:03.581 "adrfam": "ipv4", 00:16:03.581 "trsvcid": "4420", 00:16:03.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:03.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:03.581 "hdgst": false, 00:16:03.581 "ddgst": false 00:16:03.581 }, 00:16:03.581 "method": "bdev_nvme_attach_controller" 00:16:03.581 }' 00:16:03.581 [2024-05-15 12:17:31.883135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:31.883150] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:31.895162] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:31.895173] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:31.907200] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:31.907211] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:31.910625] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:16:03.581 [2024-05-15 12:17:31.910668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105159 ] 00:16:03.581 [2024-05-15 12:17:31.919231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:31.919243] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:31.931257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:31.931267] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.581 [2024-05-15 12:17:31.943291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:31.943302] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:31.955322] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:31.955333] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:31.967355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:31.967365] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:31.979384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:31.979394] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:31.980556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.581 [2024-05-15 12:17:31.991416] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:31.991430] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:32.003448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:32.003459] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:32.015482] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:32.015498] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:32.027516] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:32.027533] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:32.039545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:32.039555] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:32.050984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.581 [2024-05-15 12:17:32.051578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:32.051591] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:32.063615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:32.063632] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:32.075646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:32.075663] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:32.087676] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:32.087690] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.581 [2024-05-15 12:17:32.099706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.581 [2024-05-15 12:17:32.099719] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.840 [2024-05-15 12:17:32.111742] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.840 [2024-05-15 12:17:32.111755] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.840 [2024-05-15 12:17:32.123769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.840 [2024-05-15 12:17:32.123781] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.840 [2024-05-15 12:17:32.135803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.840 [2024-05-15 12:17:32.135814] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.840 [2024-05-15 12:17:32.147854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.840 [2024-05-15 12:17:32.147874] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.840 [2024-05-15 12:17:32.159875] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.840 [2024-05-15 12:17:32.159890] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 
[2024-05-15 12:17:32.171907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.171923] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.183938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.183949] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.195970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.195980] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.208007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.208021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.220040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.220055] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.232070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.232080] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.244102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.244113] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.256134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.256145] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.268168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.268182] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.280203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.280214] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.292239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.292253] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.304271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.304283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.316305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.316317] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.328337] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.328348] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.340369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 
12:17:32.340379] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.352404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.352416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.841 [2024-05-15 12:17:32.364440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.841 [2024-05-15 12:17:32.364458] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 Running I/O for 5 seconds... 00:16:04.101 [2024-05-15 12:17:32.376471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.376485] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.399488] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.399509] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.409807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.409828] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.424482] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.424502] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.438315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.438336] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.451795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.451816] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.465216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.465236] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.478887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.478908] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.492894] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.492914] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.504683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.504703] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.518421] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.518441] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.532449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.532468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 
12:17:32.543718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.543743] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.558001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.558021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.571216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.571236] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.585021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.585041] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.598499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.598520] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.611860] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.611880] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.101 [2024-05-15 12:17:32.624755] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.101 [2024-05-15 12:17:32.624775] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.360 [2024-05-15 12:17:32.636713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.360 [2024-05-15 12:17:32.636733] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.360 [2024-05-15 12:17:32.650692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.360 [2024-05-15 12:17:32.650711] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.360 [2024-05-15 12:17:32.664143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.360 [2024-05-15 12:17:32.664163] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.360 [2024-05-15 12:17:32.677765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.360 [2024-05-15 12:17:32.677785] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.360 [2024-05-15 12:17:32.691012] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.360 [2024-05-15 12:17:32.691031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.360 [2024-05-15 12:17:32.705141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.360 [2024-05-15 12:17:32.705161] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.360 [2024-05-15 12:17:32.718340] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.718360] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.361 [2024-05-15 12:17:32.732737] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.732756] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.361 [2024-05-15 12:17:32.748370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.748391] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.361 [2024-05-15 12:17:32.762566] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.762587] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.361 [2024-05-15 12:17:32.775995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.776015] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.361 [2024-05-15 12:17:32.789989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.790009] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.361 [2024-05-15 12:17:32.799103] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.799129] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.361 [2024-05-15 12:17:32.815087] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.815107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.361 [2024-05-15 12:17:32.829726] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.829747] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.361 [2024-05-15 12:17:32.843326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.843347] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.361 [2024-05-15 12:17:32.857238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.857258] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.361 [2024-05-15 12:17:32.871129] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.871149] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.361 [2024-05-15 12:17:32.884650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.361 [2024-05-15 12:17:32.884670] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:32.898964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:32.898984] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:32.912099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:32.912118] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:32.927846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:32.927865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:32.939922] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:32.939941] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:32.954788] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:32.954807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:32.969377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:32.969397] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:32.983012] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:32.983032] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:32.996964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:32.996983] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:33.008262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:33.008282] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:33.023327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:33.023346] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:33.038579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:33.038599] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:33.054709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:33.054729] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:33.069020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:33.069040] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:33.083250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:33.083269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.620 [2024-05-15 12:17:33.098624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.620 [2024-05-15 12:17:33.098643] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.621 [2024-05-15 12:17:33.112652] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.621 [2024-05-15 12:17:33.112671] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.621 [2024-05-15 12:17:33.128227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.621 [2024-05-15 12:17:33.128246] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.621 [2024-05-15 12:17:33.142557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.621 [2024-05-15 12:17:33.142577] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.154471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.154491] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.168434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.168454] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.180689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.180708] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.194547] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.194567] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.207368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.207387] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.221702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.221721] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.237120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.237139] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.251234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.251253] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.264955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.264975] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.278883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.278902] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.293348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.293368] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.307724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.307743] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.879 [2024-05-15 12:17:33.322743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.879 [2024-05-15 12:17:33.322763] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.880 [2024-05-15 12:17:33.336574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.880 [2024-05-15 12:17:33.336594] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.880 [2024-05-15 12:17:33.350015] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.880 [2024-05-15 12:17:33.350034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.880 [2024-05-15 12:17:33.364032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.880 [2024-05-15 12:17:33.364051] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.880 [2024-05-15 12:17:33.378019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.880 [2024-05-15 12:17:33.378038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.880 [2024-05-15 12:17:33.391651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.880 [2024-05-15 12:17:33.391670] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.880 [2024-05-15 12:17:33.406726] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.880 [2024-05-15 12:17:33.406745] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.418904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.418923] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.433269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.433288] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.449209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.449233] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.467151] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.467170] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.477241] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.477261] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.492330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.492349] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.507300] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.507319] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.522154] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.522173] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.535868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.535888] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.549985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.550005] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.563009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.563029] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.578880] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.578899] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.594210] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.594230] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.607850] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.607871] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.621375] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.621396] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.634814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.634834] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.648674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.648695] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.139 [2024-05-15 12:17:33.661921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.139 [2024-05-15 12:17:33.661941] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.675677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.675698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.688919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.688939] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.702426] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.702446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.716260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.716279] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.728135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.728155] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.741763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.741783] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.754958] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.754979] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.768706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.768725] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.782305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.782325] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.796039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.796059] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.809958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.809980] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.823177] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.823203] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.836689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.836709] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.850066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.850091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.863492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.863514] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.877280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.877301] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.891029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.891050] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.904425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.904447] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.399 [2024-05-15 12:17:33.917907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.399 [2024-05-15 12:17:33.917926] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.659 [2024-05-15 12:17:33.931920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.659 [2024-05-15 12:17:33.931939] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.659 [2024-05-15 12:17:33.947067] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.659 [2024-05-15 12:17:33.947087] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.659 [2024-05-15 12:17:33.961497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.659 [2024-05-15 12:17:33.961516] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.659 [2024-05-15 12:17:33.976025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.659 [2024-05-15 12:17:33.976046] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.659 [2024-05-15 12:17:33.989909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.659 [2024-05-15 12:17:33.989928] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.659 [2024-05-15 12:17:34.003116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.659 [2024-05-15 12:17:34.003135] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.659 [2024-05-15 12:17:34.016619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.659 [2024-05-15 12:17:34.016639] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.659 [2024-05-15 12:17:34.030411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.659 [2024-05-15 12:17:34.030431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.659 [2024-05-15 12:17:34.042397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.659 [2024-05-15 12:17:34.042416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.659 [2024-05-15 12:17:34.055897] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.659 [2024-05-15 12:17:34.055917] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.659 [2024-05-15 12:17:34.070212] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.659 [2024-05-15 12:17:34.070232] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.659 [2024-05-15 12:17:34.081323] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.659 [2024-05-15 12:17:34.081343] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.660 [2024-05-15 12:17:34.095333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.660 [2024-05-15 12:17:34.095353] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.660 [2024-05-15 12:17:34.109178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.660 [2024-05-15 12:17:34.109208] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.660 [2024-05-15 12:17:34.122772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.660 [2024-05-15 12:17:34.122792] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.660 [2024-05-15 12:17:34.136352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.660 [2024-05-15 12:17:34.136372] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.660 [2024-05-15 12:17:34.149950] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.660 [2024-05-15 12:17:34.149970] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.660 [2024-05-15 12:17:34.163416] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.660 [2024-05-15 12:17:34.163437] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.660 [2024-05-15 12:17:34.176943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.660 [2024-05-15 12:17:34.176963] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.191357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.191376] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.207011] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.207035] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.220857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.220876] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.234406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.234425] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.248075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.248094] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.261784] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.261803] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.275340] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.275359] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.288857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.288877] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.302077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.302097] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.316066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.316086] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.329541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.329561] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.343377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.343396] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.356818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.356837] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.370715] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.370739] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.384469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.384488] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.398956] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.398975] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.410145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.410165] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.424231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.424250] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.920 [2024-05-15 12:17:34.437850] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.920 [2024-05-15 12:17:34.437870] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.453746] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.453766] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.468432] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.468452] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.480703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.480722] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.494165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.494184] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.510906] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.510925] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.522456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.522476] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.536750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.536769] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.550701] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.550721] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.562359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.562378] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.575955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.575974] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.589263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.589283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.603366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.603386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.616920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.616939] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.631724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.631748] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.647134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.647154] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.661308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.661328] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.674755] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.674774] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.180 [2024-05-15 12:17:34.693742] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.180 [2024-05-15 12:17:34.693762] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.711297] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.711318] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.725958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.725978] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.746585] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.746604] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.760855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.760874] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.771692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.771711] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.786582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.786601] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.801388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.801408] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.816184] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.816208] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.831642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.831662] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.845514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.845535] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.858973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.858993] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.872598] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.872619] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.886172] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.886198] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.901346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.901365] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.918595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.918619] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.936917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.936936] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.949999] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.950020] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.440 [2024-05-15 12:17:34.964087] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.440 [2024-05-15 12:17:34.964107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.700 [2024-05-15 12:17:34.977388] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.700 [2024-05-15 12:17:34.977408] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.700 [2024-05-15 12:17:34.992791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.700 [2024-05-15 12:17:34.992811] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.700 [2024-05-15 12:17:35.006831] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.700 [2024-05-15 12:17:35.006851] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.700 [2024-05-15 12:17:35.021152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.700 [2024-05-15 12:17:35.021171] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.700 [2024-05-15 12:17:35.034665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.700 [2024-05-15 12:17:35.034685] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.700 [2024-05-15 12:17:35.048718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.700 [2024-05-15 12:17:35.048739] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.700 [2024-05-15 12:17:35.062016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.700 [2024-05-15 12:17:35.062036] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.700 [2024-05-15 12:17:35.075842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.700 [2024-05-15 12:17:35.075862] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.700 [2024-05-15 12:17:35.089451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.700 [2024-05-15 12:17:35.089470] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.700 [2024-05-15 12:17:35.103919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.700 [2024-05-15 12:17:35.103938] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.700 [2024-05-15 12:17:35.119113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.700 [2024-05-15 12:17:35.119134] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.700 [2024-05-15 12:17:35.132984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.701 [2024-05-15 12:17:35.133003] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.701 [2024-05-15 12:17:35.146491] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.701 [2024-05-15 12:17:35.146510] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.701 [2024-05-15 12:17:35.160090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.701 [2024-05-15 12:17:35.160110] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.701 [2024-05-15 12:17:35.173619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.701 [2024-05-15 12:17:35.173639] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.701
(the same two messages -- subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace -- recur for every subsequent add-namespace attempt between 12:17:35.187452 and 12:17:37.375343, differing only in their timestamps)
[2024-05-15 12:17:37.389940]
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.040 [2024-05-15 12:17:37.389960] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.040 00:16:09.040 Latency(us) 00:16:09.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.040 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:09.040 Nvme1n1 : 5.01 16859.76 131.72 0.00 0.00 7585.99 2490.37 31667.00 00:16:09.040 =================================================================================================================== 00:16:09.040 Total : 16859.76 131.72 0.00 0.00 7585.99 2490.37 31667.00 00:16:09.040 [2024-05-15 12:17:37.399338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.040 [2024-05-15 12:17:37.399355] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.040 [2024-05-15 12:17:37.411368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.040 [2024-05-15 12:17:37.411385] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.040 [2024-05-15 12:17:37.423403] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.040 [2024-05-15 12:17:37.423418] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.040 [2024-05-15 12:17:37.435438] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.040 [2024-05-15 12:17:37.435455] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.040 [2024-05-15 12:17:37.447466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.040 [2024-05-15 12:17:37.447482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.040 [2024-05-15 12:17:37.459492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.040 [2024-05-15 12:17:37.459505] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.040 [2024-05-15 12:17:37.471524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.040 [2024-05-15 12:17:37.471537] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.040 [2024-05-15 12:17:37.483555] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.040 [2024-05-15 12:17:37.483569] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.040 [2024-05-15 12:17:37.495585] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.040 [2024-05-15 12:17:37.495599] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.040 [2024-05-15 12:17:37.507616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.041 [2024-05-15 12:17:37.507626] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.041 [2024-05-15 12:17:37.519649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.041 [2024-05-15 12:17:37.519660] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.041 [2024-05-15 12:17:37.531683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.041 [2024-05-15 12:17:37.531695] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.041 [2024-05-15 12:17:37.543712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.041 [2024-05-15 12:17:37.543723] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.041 [2024-05-15 12:17:37.555743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.041 [2024-05-15 12:17:37.555754] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.041 [2024-05-15 12:17:37.567779] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.041 [2024-05-15 12:17:37.567791] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.299 [2024-05-15 12:17:37.579809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.299 [2024-05-15 12:17:37.579821] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.299 [2024-05-15 12:17:37.591841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.299 [2024-05-15 12:17:37.591852] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2105159) - No such process 00:16:09.299 12:17:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2105159 00:16:09.299 12:17:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.299 12:17:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.299 12:17:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.299 12:17:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.299 12:17:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:09.299 12:17:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.300 12:17:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.300 delay0 00:16:09.300 12:17:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.300 12:17:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:09.300 12:17:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.300 12:17:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.300 12:17:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.300 12:17:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:09.300 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.300 [2024-05-15 12:17:37.679918] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:15.865 Initializing NVMe Controllers 00:16:15.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:15.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:15.865 Initialization complete. Launching workers. 
00:16:15.865 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 148 00:16:15.865 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 437, failed to submit 31 00:16:15.865 success 273, unsuccess 164, failed 0 00:16:15.865 12:17:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:15.865 12:17:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:15.865 12:17:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:15.865 12:17:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:15.865 12:17:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:15.865 12:17:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:15.865 12:17:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:15.865 12:17:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:15.865 rmmod nvme_tcp 00:16:15.865 rmmod nvme_fabrics 00:16:15.865 rmmod nvme_keyring 00:16:15.865 12:17:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2103264 ']' 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2103264 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' -z 2103264 ']' 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # kill -0 2103264 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # uname 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2103264 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2103264' 00:16:15.866 killing process with pid 2103264 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # kill 2103264 00:16:15.866 [2024-05-15 12:17:43.928726] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:15.866 12:17:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@971 -- # wait 2103264 00:16:15.866 12:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:15.866 12:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:15.866 12:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:15.866 12:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.866 12:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:15.866 12:17:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.866 12:17:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.866 12:17:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.771 
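For reference, the namespace-swap-plus-abort sequence traced in the zcopy run above can be replayed by hand against a live target. This is a minimal sketch only: it assumes a built SPDK tree (paths relative to the checkout), a target already listening on 10.0.0.2:4420 with subsystem nqn.2016-06.io.spdk:cnode1 exposing a malloc bdev named malloc0 as NSID 1, and that the rpc_cmd wrapper seen in the trace maps to scripts/rpc.py on the default /var/tmp/spdk.sock socket.

    # drop the original namespace and re-add NSID 1 backed by a delay bdev (latency values in microseconds)
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # queue abortable I/O against the namespace for 5 seconds (same arguments as the run above)
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The point of backing the new namespace with a delay bdev is that I/O stays queued long enough for the abort requests to have something to cancel, which is why the summary above reports hundreds of aborts submitted rather than immediate completions.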
12:17:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:17.771 00:16:17.771 real 0m32.716s 00:16:17.771 user 0m42.170s 00:16:17.771 sys 0m13.168s 00:16:17.771 12:17:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:17.771 12:17:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:17.771 ************************************ 00:16:17.771 END TEST nvmf_zcopy 00:16:17.771 ************************************ 00:16:17.771 12:17:46 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:17.771 12:17:46 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:17.771 12:17:46 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:17.771 12:17:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:18.031 ************************************ 00:16:18.031 START TEST nvmf_nmic 00:16:18.031 ************************************ 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:18.031 * Looking for test storage... 00:16:18.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:18.031 12:17:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:24.663 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:24.663 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:24.663 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:24.663 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:24.663 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:24.663 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:24.664 12:17:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:24.664 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:24.664 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:24.664 Found net devices under 0000:af:00.0: cvl_0_0 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:24.664 12:17:53 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:24.664 Found net devices under 0000:af:00.1: cvl_0_1 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:24.664 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:24.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:24.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:16:24.923 00:16:24.923 --- 10.0.0.2 ping statistics --- 00:16:24.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.923 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:24.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:24.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:16:24.923 00:16:24.923 --- 10.0.0.1 ping statistics --- 00:16:24.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.923 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2110960 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2110960 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # '[' -z 2110960 ']' 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:24.923 12:17:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:24.923 [2024-05-15 12:17:53.415733] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:16:24.923 [2024-05-15 12:17:53.415778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.923 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.182 [2024-05-15 12:17:53.488760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:25.182 [2024-05-15 12:17:53.559063] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.182 [2024-05-15 12:17:53.559105] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.182 [2024-05-15 12:17:53.559114] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.182 [2024-05-15 12:17:53.559122] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.182 [2024-05-15 12:17:53.559145] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.182 [2024-05-15 12:17:53.559209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.182 [2024-05-15 12:17:53.559265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.182 [2024-05-15 12:17:53.559349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:25.182 [2024-05-15 12:17:53.559350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.750 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:25.750 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@861 -- # return 0 00:16:25.750 12:17:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.750 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:25.750 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:25.750 12:17:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.750 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:25.750 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:25.750 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:25.750 [2024-05-15 12:17:54.274049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:26.011 Malloc0 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:26.011 [2024-05-15 12:17:54.328391] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:26.011 [2024-05-15 12:17:54.328648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:26.011 test case1: single bdev can't be used in multiple subsystems 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:26.011 [2024-05-15 12:17:54.352510] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:26.011 [2024-05-15 12:17:54.352530] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:26.011 [2024-05-15 12:17:54.352539] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.011 request: 00:16:26.011 { 00:16:26.011 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:26.011 "namespace": { 00:16:26.011 "bdev_name": "Malloc0", 00:16:26.011 "no_auto_visible": false 00:16:26.011 }, 00:16:26.011 "method": "nvmf_subsystem_add_ns", 00:16:26.011 "req_id": 1 00:16:26.011 } 00:16:26.011 Got JSON-RPC error response 00:16:26.011 response: 00:16:26.011 { 00:16:26.011 "code": -32602, 00:16:26.011 "message": "Invalid parameters" 00:16:26.011 } 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:16:26.011 12:17:54 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:26.011 Adding namespace failed - expected result. 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:26.011 test case2: host connect to nvmf target in multiple paths 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:26.011 [2024-05-15 12:17:54.368666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:26.011 12:17:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:27.387 12:17:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:28.763 12:17:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:28.763 12:17:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local i=0 00:16:28.763 12:17:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.763 12:17:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:16:28.763 12:17:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # sleep 2 00:16:30.668 12:17:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:16:30.668 12:17:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:30.668 12:17:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:16:30.668 12:17:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:16:30.668 12:17:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.668 12:17:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # return 0 00:16:30.668 12:17:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:30.668 [global] 00:16:30.668 thread=1 00:16:30.668 invalidate=1 00:16:30.668 rw=write 00:16:30.668 time_based=1 00:16:30.668 runtime=1 00:16:30.668 ioengine=libaio 00:16:30.668 direct=1 00:16:30.668 bs=4096 00:16:30.668 iodepth=1 00:16:30.668 norandommap=0 00:16:30.668 numjobs=1 00:16:30.668 00:16:30.668 verify_dump=1 00:16:30.668 verify_backlog=512 00:16:30.668 verify_state_save=0 00:16:30.668 do_verify=1 00:16:30.668 verify=crc32c-intel 00:16:30.668 [job0] 00:16:30.668 filename=/dev/nvme0n1 00:16:30.668 Could not set queue depth (nvme0n1) 00:16:30.926 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:16:30.927 fio-3.35 00:16:30.927 Starting 1 thread 00:16:32.306 00:16:32.306 job0: (groupid=0, jobs=1): err= 0: pid=2112194: Wed May 15 12:18:00 2024 00:16:32.306 read: IOPS=514, BW=2057KiB/s (2107kB/s)(2080KiB/1011msec) 00:16:32.306 slat (nsec): min=8741, max=44163, avg=9669.55, stdev=2586.86 00:16:32.306 clat (usec): min=432, max=42119, avg=1148.81, stdev=5093.78 00:16:32.306 lat (usec): min=441, max=42144, avg=1158.48, stdev=5095.55 00:16:32.306 clat percentiles (usec): 00:16:32.306 | 1.00th=[ 441], 5.00th=[ 453], 10.00th=[ 461], 20.00th=[ 474], 00:16:32.306 | 30.00th=[ 494], 40.00th=[ 498], 50.00th=[ 506], 60.00th=[ 523], 00:16:32.306 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 586], 95.00th=[ 611], 00:16:32.306 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:32.306 | 99.99th=[42206] 00:16:32.306 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:16:32.306 slat (usec): min=11, max=26970, avg=39.51, stdev=842.41 00:16:32.306 clat (usec): min=186, max=798, avg=355.60, stdev=96.76 00:16:32.306 lat (usec): min=198, max=27672, avg=395.11, stdev=858.74 00:16:32.306 clat percentiles (usec): 00:16:32.306 | 1.00th=[ 194], 5.00th=[ 210], 10.00th=[ 235], 20.00th=[ 277], 00:16:32.306 | 30.00th=[ 285], 40.00th=[ 314], 50.00th=[ 351], 60.00th=[ 379], 00:16:32.306 | 70.00th=[ 400], 80.00th=[ 437], 90.00th=[ 498], 95.00th=[ 502], 00:16:32.306 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 725], 99.95th=[ 799], 00:16:32.306 | 99.99th=[ 799] 00:16:32.306 bw ( KiB/s): min= 3832, max= 4360, per=100.00%, avg=4096.00, stdev=373.35, samples=2 00:16:32.306 iops : min= 958, max= 1090, avg=1024.00, stdev=93.34, samples=2 00:16:32.306 lat (usec) : 250=7.64%, 500=68.07%, 750=23.70%, 1000=0.06% 00:16:32.306 lat (msec) : 50=0.52% 00:16:32.306 cpu : usr=0.99%, sys=2.67%, ctx=1547, majf=0, minf=2 00:16:32.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.306 issued rwts: total=520,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.306 00:16:32.306 Run status group 0 (all jobs): 00:16:32.306 READ: bw=2057KiB/s (2107kB/s), 2057KiB/s-2057KiB/s (2107kB/s-2107kB/s), io=2080KiB (2130kB), run=1011-1011msec 00:16:32.306 WRITE: bw=4051KiB/s (4149kB/s), 4051KiB/s-4051KiB/s (4149kB/s-4149kB/s), io=4096KiB (4194kB), run=1011-1011msec 00:16:32.306 00:16:32.306 Disk stats (read/write): 00:16:32.306 nvme0n1: ios=542/1024, merge=0/0, ticks=1432/361, in_queue=1793, util=98.80% 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # local i=0 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.306 
12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1228 -- # return 0 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:32.306 rmmod nvme_tcp 00:16:32.306 rmmod nvme_fabrics 00:16:32.306 rmmod nvme_keyring 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:32.306 12:18:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2110960 ']' 00:16:32.566 12:18:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2110960 00:16:32.566 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' -z 2110960 ']' 00:16:32.566 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # kill -0 2110960 00:16:32.566 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # uname 00:16:32.566 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:32.566 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2110960 00:16:32.566 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:32.566 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:32.566 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2110960' 00:16:32.566 killing process with pid 2110960 00:16:32.567 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # kill 2110960 00:16:32.567 [2024-05-15 12:18:00.892050] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:32.567 12:18:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@971 -- # wait 2110960 00:16:32.825 12:18:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:32.825 12:18:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:32.825 12:18:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:32.825 12:18:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:32.825 12:18:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:32.825 12:18:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.825 12:18:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.825 12:18:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.733 12:18:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.733 00:16:34.733 real 0m16.891s 00:16:34.733 user 0m39.895s 00:16:34.733 sys 0m6.390s 00:16:34.733 12:18:03 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:16:34.733 12:18:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:34.733 ************************************ 00:16:34.733 END TEST nvmf_nmic 00:16:34.733 ************************************ 00:16:34.733 12:18:03 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:34.733 12:18:03 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:34.733 12:18:03 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:34.733 12:18:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:34.991 ************************************ 00:16:34.991 START TEST nvmf_fio_target 00:16:34.991 ************************************ 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:34.991 * Looking for test storage... 00:16:34.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
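The nvmftestfini sequence that closed the nmic test above tears the fixture down in the reverse order it was built: unload the host-side NVMe/TCP modules, kill the nvmf_tgt that was started under the namespace, remove the namespace, and flush the initiator address. A minimal standalone sketch of that cleanup follows; it reuses the names from this run (cvl_0_0_ns_spdk, cvl_0_1), takes the target PID as an argument, and the explicit ip netns delete is an assumption about what the hidden _remove_spdk_ns helper amounts to, not a copy of it.

#!/usr/bin/env bash
# cleanup sketch -- approximates nvmftestfini for the tcp/phy case in the trace above
nvmfpid=${1:?usage: cleanup.sh <nvmf_tgt pid>}
modprobe -v -r nvme-tcp || true        # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines seen above
modprobe -v -r nvme-fabrics || true
kill "$nvmfpid" 2>/dev/null || true    # killprocess: stop the target started earlier
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.2; done
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed stand-in for _remove_spdk_ns
ip -4 addr flush cvl_0_1                              # drop 10.0.0.1/24 from the initiator port, as in the trace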
00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:34.991 12:18:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
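The gather_supported_nvmf_pci_devs block that follows walks a pre-built pci_bus_cache and sorts NICs into e810/x722/mlx buckets by vendor:device ID before printing the "Found 0000:af:00.0 (0x8086 - 0x159b)" lines. A simplified stand-in that reads the same IDs straight from sysfs is sketched below; it skips the cache and the link-state checks the real nvmf/common.sh performs, and the array names are just echoes of the ones in the trace.

#!/usr/bin/env bash
# sketch: bucket PCI NICs by vendor:device the way the trace does, reading sysfs directly
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
for dev in /sys/bus/pci/devices/*; do
  id="$(<"$dev/vendor"):$(<"$dev/device")"
  case "$id" in
    "$intel:0x1592"|"$intel:0x159b") e810+=("${dev##*/}") ;;   # Intel E810
    "$intel:0x37d2")                 x722+=("${dev##*/}") ;;   # Intel X722
    "$mellanox:0x101"[3579d]|"$mellanox:0x1021"|"$mellanox:0xa2d6"|"$mellanox:0xa2dc") mlx+=("${dev##*/}") ;;  # Mellanox ConnectX family
  esac
done
for pci in "${e810[@]}"; do
  echo "Found $pci ($(<"/sys/bus/pci/devices/$pci/vendor") - $(<"/sys/bus/pci/devices/$pci/device"))"
done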
00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:41.623 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:41.623 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.623 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.624 
12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:41.624 Found net devices under 0000:af:00.0: cvl_0_0 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:41.624 Found net devices under 0000:af:00.1: cvl_0_1 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:41.624 12:18:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:41.624 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:41.624 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:41.624 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:41.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:16:41.624 00:16:41.624 --- 10.0.0.2 ping statistics --- 00:16:41.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.624 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:16:41.624 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:41.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:16:41.624 00:16:41.624 --- 10.0.0.1 ping statistics --- 00:16:41.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.624 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2116712 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2116712 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # '[' -z 2116712 ']' 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
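The nvmf_tcp_init steps above split one dual-port NIC between two network namespaces so a single host can act as both target and initiator: cvl_0_0 moves into cvl_0_0_ns_spdk and gets 10.0.0.2/24, cvl_0_1 stays in the default namespace with 10.0.0.1/24, and an iptables rule admits TCP/4420 on the initiator side. Consolidated as one block (same names and addresses as this run; it assumes the two ports can reach each other on the wire, as the pings above confirm):

#!/usr/bin/env bash
# sketch of the nvmf_tcp_init topology seen in the trace
set -e
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                       # target port lives in its own namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator

nvmf_tgt is then launched with ip netns exec "$NS" so it listens on 10.0.0.2:4420 inside the namespace, while the nvme connect calls later in the log run from the default namespace against that address.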
00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:41.884 12:18:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.884 [2024-05-15 12:18:10.247596] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:16:41.884 [2024-05-15 12:18:10.247644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.884 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.884 [2024-05-15 12:18:10.321739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.884 [2024-05-15 12:18:10.396917] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.884 [2024-05-15 12:18:10.396956] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.884 [2024-05-15 12:18:10.396966] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.884 [2024-05-15 12:18:10.396975] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.884 [2024-05-15 12:18:10.396998] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.884 [2024-05-15 12:18:10.397049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.884 [2024-05-15 12:18:10.397146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.884 [2024-05-15 12:18:10.397217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.884 [2024-05-15 12:18:10.397220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.820 12:18:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:42.820 12:18:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@861 -- # return 0 00:16:42.820 12:18:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:42.820 12:18:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:42.820 12:18:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.820 12:18:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.820 12:18:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:42.820 [2024-05-15 12:18:11.262599] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.820 12:18:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:43.079 12:18:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:43.079 12:18:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:43.339 12:18:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:43.339 12:18:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:43.598 12:18:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:43.598 12:18:11 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:43.598 12:18:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:43.598 12:18:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:43.857 12:18:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:44.116 12:18:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:44.116 12:18:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:44.375 12:18:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:44.375 12:18:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:44.375 12:18:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:44.375 12:18:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:44.634 12:18:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:44.892 12:18:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:44.892 12:18:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:44.892 12:18:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:44.892 12:18:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.151 12:18:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.410 [2024-05-15 12:18:13.727065] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:45.410 [2024-05-15 12:18:13.727351] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.410 12:18:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:45.410 12:18:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:45.668 12:18:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.046 12:18:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:16:47.046 12:18:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local i=0 00:16:47.046 12:18:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.046 12:18:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # [[ -n 4 ]] 00:16:47.046 12:18:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # nvme_device_counter=4 00:16:47.046 12:18:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # sleep 2 00:16:48.952 12:18:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:16:48.952 12:18:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:48.952 12:18:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:16:48.952 12:18:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_devices=4 00:16:48.952 12:18:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.952 12:18:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # return 0 00:16:48.952 12:18:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:48.952 [global] 00:16:48.952 thread=1 00:16:48.952 invalidate=1 00:16:48.952 rw=write 00:16:48.952 time_based=1 00:16:48.952 runtime=1 00:16:48.952 ioengine=libaio 00:16:48.952 direct=1 00:16:48.952 bs=4096 00:16:48.952 iodepth=1 00:16:48.952 norandommap=0 00:16:48.952 numjobs=1 00:16:48.952 00:16:48.952 verify_dump=1 00:16:48.952 verify_backlog=512 00:16:48.952 verify_state_save=0 00:16:48.952 do_verify=1 00:16:48.953 verify=crc32c-intel 00:16:48.953 [job0] 00:16:48.953 filename=/dev/nvme0n1 00:16:48.953 [job1] 00:16:48.953 filename=/dev/nvme0n2 00:16:48.953 [job2] 00:16:48.953 filename=/dev/nvme0n3 00:16:48.953 [job3] 00:16:48.953 filename=/dev/nvme0n4 00:16:49.238 Could not set queue depth (nvme0n1) 00:16:49.238 Could not set queue depth (nvme0n2) 00:16:49.238 Could not set queue depth (nvme0n3) 00:16:49.238 Could not set queue depth (nvme0n4) 00:16:49.502 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.502 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.502 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.502 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.502 fio-3.35 00:16:49.502 Starting 4 threads 00:16:50.873 00:16:50.873 job0: (groupid=0, jobs=1): err= 0: pid=2118197: Wed May 15 12:18:19 2024 00:16:50.873 read: IOPS=20, BW=80.8KiB/s (82.7kB/s)(84.0KiB/1040msec) 00:16:50.873 slat (nsec): min=11544, max=25657, avg=24610.38, stdev=3004.98 00:16:50.873 clat (usec): min=41066, max=42048, avg=41854.11, stdev=282.70 00:16:50.873 lat (usec): min=41091, max=42074, avg=41878.72, stdev=283.82 00:16:50.873 clat percentiles (usec): 00:16:50.873 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:16:50.873 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:50.873 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:50.873 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 
00:16:50.873 | 99.99th=[42206] 00:16:50.873 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:16:50.873 slat (usec): min=11, max=10628, avg=33.87, stdev=469.14 00:16:50.873 clat (usec): min=201, max=690, avg=275.46, stdev=78.60 00:16:50.873 lat (usec): min=213, max=11047, avg=309.34, stdev=482.07 00:16:50.873 clat percentiles (usec): 00:16:50.873 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:16:50.873 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:16:50.873 | 70.00th=[ 277], 80.00th=[ 338], 90.00th=[ 416], 95.00th=[ 424], 00:16:50.873 | 99.00th=[ 502], 99.50th=[ 506], 99.90th=[ 693], 99.95th=[ 693], 00:16:50.873 | 99.99th=[ 693] 00:16:50.873 bw ( KiB/s): min= 4096, max= 4096, per=26.31%, avg=4096.00, stdev= 0.00, samples=1 00:16:50.873 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:50.873 lat (usec) : 250=59.29%, 500=35.27%, 750=1.50% 00:16:50.873 lat (msec) : 50=3.94% 00:16:50.873 cpu : usr=0.19%, sys=0.77%, ctx=536, majf=0, minf=2 00:16:50.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.873 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.873 job1: (groupid=0, jobs=1): err= 0: pid=2118217: Wed May 15 12:18:19 2024 00:16:50.873 read: IOPS=1143, BW=4575KiB/s (4685kB/s)(4580KiB/1001msec) 00:16:50.873 slat (nsec): min=8609, max=40507, avg=9341.77, stdev=1335.44 00:16:50.873 clat (usec): min=321, max=3130, avg=493.13, stdev=92.52 00:16:50.873 lat (usec): min=330, max=3141, avg=502.47, stdev=92.56 00:16:50.873 clat percentiles (usec): 00:16:50.873 | 1.00th=[ 334], 5.00th=[ 371], 10.00th=[ 420], 20.00th=[ 482], 00:16:50.873 | 30.00th=[ 494], 40.00th=[ 498], 50.00th=[ 502], 60.00th=[ 506], 00:16:50.873 | 70.00th=[ 510], 80.00th=[ 515], 90.00th=[ 523], 95.00th=[ 529], 00:16:50.873 | 99.00th=[ 635], 99.50th=[ 644], 99.90th=[ 660], 99.95th=[ 3130], 00:16:50.873 | 99.99th=[ 3130] 00:16:50.873 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:50.873 slat (nsec): min=11504, max=49228, avg=12814.66, stdev=2027.69 00:16:50.873 clat (usec): min=203, max=668, avg=258.95, stdev=66.03 00:16:50.873 lat (usec): min=215, max=710, avg=271.76, stdev=66.34 00:16:50.873 clat percentiles (usec): 00:16:50.873 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:16:50.873 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 241], 00:16:50.873 | 70.00th=[ 253], 80.00th=[ 281], 90.00th=[ 359], 95.00th=[ 429], 00:16:50.873 | 99.00th=[ 502], 99.50th=[ 506], 99.90th=[ 553], 99.95th=[ 668], 00:16:50.873 | 99.99th=[ 668] 00:16:50.873 bw ( KiB/s): min= 6832, max= 6832, per=43.88%, avg=6832.00, stdev= 0.00, samples=1 00:16:50.873 iops : min= 1708, max= 1708, avg=1708.00, stdev= 0.00, samples=1 00:16:50.873 lat (usec) : 250=39.50%, 500=36.07%, 750=24.39% 00:16:50.873 lat (msec) : 4=0.04% 00:16:50.873 cpu : usr=2.50%, sys=4.70%, ctx=2681, majf=0, minf=1 00:16:50.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.873 issued rwts: total=1145,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:16:50.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.873 job2: (groupid=0, jobs=1): err= 0: pid=2118240: Wed May 15 12:18:19 2024 00:16:50.873 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:50.873 slat (nsec): min=8961, max=40758, avg=9652.62, stdev=1400.40 00:16:50.873 clat (usec): min=348, max=757, avg=542.18, stdev=57.03 00:16:50.873 lat (usec): min=357, max=766, avg=551.83, stdev=56.99 00:16:50.873 clat percentiles (usec): 00:16:50.873 | 1.00th=[ 367], 5.00th=[ 388], 10.00th=[ 453], 20.00th=[ 529], 00:16:50.873 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 553], 60.00th=[ 562], 00:16:50.873 | 70.00th=[ 570], 80.00th=[ 570], 90.00th=[ 586], 95.00th=[ 611], 00:16:50.873 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 676], 99.95th=[ 758], 00:16:50.873 | 99.99th=[ 758] 00:16:50.873 write: IOPS=1486, BW=5946KiB/s (6089kB/s)(5952KiB/1001msec); 0 zone resets 00:16:50.873 slat (nsec): min=7286, max=72010, avg=13004.07, stdev=2565.52 00:16:50.873 clat (usec): min=209, max=703, avg=275.09, stdev=64.82 00:16:50.873 lat (usec): min=221, max=775, avg=288.09, stdev=65.01 00:16:50.873 clat percentiles (usec): 00:16:50.873 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:16:50.873 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 262], 00:16:50.873 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 351], 95.00th=[ 408], 00:16:50.873 | 99.00th=[ 502], 99.50th=[ 545], 99.90th=[ 701], 99.95th=[ 701], 00:16:50.873 | 99.99th=[ 701] 00:16:50.873 bw ( KiB/s): min= 5464, max= 5464, per=35.09%, avg=5464.00, stdev= 0.00, samples=1 00:16:50.873 iops : min= 1366, max= 1366, avg=1366.00, stdev= 0.00, samples=1 00:16:50.873 lat (usec) : 250=27.71%, 500=36.43%, 750=35.83%, 1000=0.04% 00:16:50.873 cpu : usr=3.20%, sys=3.50%, ctx=2513, majf=0, minf=1 00:16:50.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.873 issued rwts: total=1024,1488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.873 job3: (groupid=0, jobs=1): err= 0: pid=2118248: Wed May 15 12:18:19 2024 00:16:50.873 read: IOPS=20, BW=81.2KiB/s (83.2kB/s)(84.0KiB/1034msec) 00:16:50.873 slat (nsec): min=11419, max=27411, avg=25384.90, stdev=3236.88 00:16:50.873 clat (usec): min=41073, max=43000, avg=42016.63, stdev=376.25 00:16:50.873 lat (usec): min=41099, max=43026, avg=42042.01, stdev=376.30 00:16:50.873 clat percentiles (usec): 00:16:50.873 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:50.873 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:50.873 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:16:50.873 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:50.873 | 99.99th=[43254] 00:16:50.873 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:16:50.873 slat (nsec): min=11848, max=39861, avg=13171.67, stdev=1856.21 00:16:50.873 clat (usec): min=221, max=679, avg=279.57, stdev=67.20 00:16:50.873 lat (usec): min=233, max=719, avg=292.74, stdev=67.87 00:16:50.873 clat percentiles (usec): 00:16:50.873 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:16:50.873 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:16:50.873 | 70.00th=[ 269], 80.00th=[ 293], 90.00th=[ 355], 
95.00th=[ 441], 00:16:50.873 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 676], 99.95th=[ 676], 00:16:50.873 | 99.99th=[ 676] 00:16:50.873 bw ( KiB/s): min= 4096, max= 4096, per=26.31%, avg=4096.00, stdev= 0.00, samples=1 00:16:50.873 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:50.873 lat (usec) : 250=37.71%, 500=54.78%, 750=3.56% 00:16:50.873 lat (msec) : 50=3.94% 00:16:50.873 cpu : usr=0.48%, sys=0.97%, ctx=533, majf=0, minf=1 00:16:50.873 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.873 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.873 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.873 00:16:50.873 Run status group 0 (all jobs): 00:16:50.873 READ: bw=8504KiB/s (8708kB/s), 80.8KiB/s-4575KiB/s (82.7kB/s-4685kB/s), io=8844KiB (9056kB), run=1001-1040msec 00:16:50.873 WRITE: bw=15.2MiB/s (15.9MB/s), 1969KiB/s-6138KiB/s (2016kB/s-6285kB/s), io=15.8MiB (16.6MB), run=1001-1040msec 00:16:50.873 00:16:50.873 Disk stats (read/write): 00:16:50.873 nvme0n1: ios=57/512, merge=0/0, ticks=868/140, in_queue=1008, util=86.07% 00:16:50.873 nvme0n2: ios=1074/1068, merge=0/0, ticks=575/266, in_queue=841, util=88.52% 00:16:50.873 nvme0n3: ios=994/1024, merge=0/0, ticks=588/272, in_queue=860, util=92.73% 00:16:50.873 nvme0n4: ios=72/512, merge=0/0, ticks=772/139, in_queue=911, util=97.51% 00:16:50.873 12:18:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:50.873 [global] 00:16:50.873 thread=1 00:16:50.873 invalidate=1 00:16:50.873 rw=randwrite 00:16:50.873 time_based=1 00:16:50.873 runtime=1 00:16:50.873 ioengine=libaio 00:16:50.873 direct=1 00:16:50.873 bs=4096 00:16:50.873 iodepth=1 00:16:50.873 norandommap=0 00:16:50.873 numjobs=1 00:16:50.873 00:16:50.874 verify_dump=1 00:16:50.874 verify_backlog=512 00:16:50.874 verify_state_save=0 00:16:50.874 do_verify=1 00:16:50.874 verify=crc32c-intel 00:16:50.874 [job0] 00:16:50.874 filename=/dev/nvme0n1 00:16:50.874 [job1] 00:16:50.874 filename=/dev/nvme0n2 00:16:50.874 [job2] 00:16:50.874 filename=/dev/nvme0n3 00:16:50.874 [job3] 00:16:50.874 filename=/dev/nvme0n4 00:16:50.874 Could not set queue depth (nvme0n1) 00:16:50.874 Could not set queue depth (nvme0n2) 00:16:50.874 Could not set queue depth (nvme0n3) 00:16:50.874 Could not set queue depth (nvme0n4) 00:16:51.142 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.142 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.142 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.142 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.142 fio-3.35 00:16:51.142 Starting 4 threads 00:16:52.514 00:16:52.514 job0: (groupid=0, jobs=1): err= 0: pid=2118662: Wed May 15 12:18:20 2024 00:16:52.514 read: IOPS=20, BW=83.3KiB/s (85.3kB/s)(84.0KiB/1008msec) 00:16:52.514 slat (nsec): min=10995, max=25883, avg=24652.24, stdev=3143.22 00:16:52.514 clat (usec): min=41070, max=42523, avg=41957.32, stdev=242.15 00:16:52.514 lat (usec): min=41095, max=42534, avg=41981.97, stdev=240.49 
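For reference, the randwrite pass above is driven by the job file the wrapper echoes into the log: 4 KiB blocks, iodepth=1, one-second time-based jobs with crc32c-intel verification against the four connected namespaces. A minimal standalone equivalent is sketched below; the file name is hypothetical and it reproduces only the printed parameters, not the fio-wrapper itself (it assumes /dev/nvme0n1..n4 are already connected).

# randwrite-verify.fio (hypothetical name, reconstructed from the parameters logged above)
[global]
thread=1
invalidate=1
ioengine=libaio
direct=1
rw=randwrite
bs=4096
iodepth=1
time_based=1
runtime=1
norandommap=0
numjobs=1
do_verify=1
verify=crc32c-intel
verify_dump=1
verify_backlog=512
verify_state_save=0

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4

# invoke with: fio randwrite-verify.fio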
00:16:52.514 clat percentiles (usec): 00:16:52.514 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:52.514 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:52.514 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:52.514 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:52.514 | 99.99th=[42730] 00:16:52.514 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:16:52.514 slat (nsec): min=11700, max=43446, avg=13063.86, stdev=2365.53 00:16:52.514 clat (usec): min=184, max=735, avg=229.68, stdev=49.58 00:16:52.514 lat (usec): min=196, max=778, avg=242.75, stdev=50.51 00:16:52.514 clat percentiles (usec): 00:16:52.514 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 206], 00:16:52.514 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 221], 00:16:52.514 | 70.00th=[ 227], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 330], 00:16:52.514 | 99.00th=[ 494], 99.50th=[ 506], 99.90th=[ 734], 99.95th=[ 734], 00:16:52.514 | 99.99th=[ 734] 00:16:52.514 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:16:52.514 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:52.514 lat (usec) : 250=84.99%, 500=10.32%, 750=0.75% 00:16:52.514 lat (msec) : 50=3.94% 00:16:52.514 cpu : usr=0.30%, sys=0.70%, ctx=536, majf=0, minf=2 00:16:52.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.514 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.514 job1: (groupid=0, jobs=1): err= 0: pid=2118677: Wed May 15 12:18:20 2024 00:16:52.514 read: IOPS=19, BW=79.8KiB/s (81.7kB/s)(80.0KiB/1003msec) 00:16:52.514 slat (nsec): min=11615, max=31573, avg=25050.35, stdev=3968.12 00:16:52.514 clat (usec): min=41627, max=44001, avg=42051.72, stdev=467.95 00:16:52.514 lat (usec): min=41638, max=44032, avg=42076.77, stdev=469.86 00:16:52.514 clat percentiles (usec): 00:16:52.514 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:52.514 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:52.514 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:52.514 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:16:52.514 | 99.99th=[43779] 00:16:52.514 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:16:52.514 slat (nsec): min=11465, max=43835, avg=13593.71, stdev=2974.93 00:16:52.514 clat (usec): min=188, max=841, avg=298.61, stdev=76.82 00:16:52.514 lat (usec): min=201, max=885, avg=312.21, stdev=77.62 00:16:52.514 clat percentiles (usec): 00:16:52.514 | 1.00th=[ 200], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 249], 00:16:52.514 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 285], 00:16:52.514 | 70.00th=[ 306], 80.00th=[ 347], 90.00th=[ 396], 95.00th=[ 494], 00:16:52.514 | 99.00th=[ 502], 99.50th=[ 523], 99.90th=[ 840], 99.95th=[ 840], 00:16:52.514 | 99.99th=[ 840] 00:16:52.514 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:16:52.514 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:52.514 lat (usec) : 250=24.25%, 500=70.68%, 750=1.13%, 1000=0.19% 00:16:52.514 lat (msec) : 
50=3.76% 00:16:52.514 cpu : usr=0.40%, sys=0.90%, ctx=533, majf=0, minf=1 00:16:52.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.514 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.514 job2: (groupid=0, jobs=1): err= 0: pid=2118686: Wed May 15 12:18:20 2024 00:16:52.514 read: IOPS=20, BW=81.5KiB/s (83.4kB/s)(84.0KiB/1031msec) 00:16:52.514 slat (nsec): min=13678, max=27704, avg=24489.29, stdev=2684.03 00:16:52.514 clat (usec): min=41015, max=42120, avg=41895.92, stdev=232.55 00:16:52.514 lat (usec): min=41040, max=42145, avg=41920.41, stdev=233.39 00:16:52.514 clat percentiles (usec): 00:16:52.514 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:52.514 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:52.514 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:52.514 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:52.514 | 99.99th=[42206] 00:16:52.514 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:16:52.514 slat (nsec): min=12691, max=43217, avg=13947.89, stdev=2200.82 00:16:52.514 clat (usec): min=193, max=682, avg=276.77, stdev=68.36 00:16:52.514 lat (usec): min=207, max=725, avg=290.72, stdev=68.92 00:16:52.514 clat percentiles (usec): 00:16:52.514 | 1.00th=[ 204], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:16:52.514 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 262], 00:16:52.514 | 70.00th=[ 269], 80.00th=[ 293], 90.00th=[ 359], 95.00th=[ 490], 00:16:52.514 | 99.00th=[ 510], 99.50th=[ 515], 99.90th=[ 685], 99.95th=[ 685], 00:16:52.514 | 99.99th=[ 685] 00:16:52.514 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:16:52.514 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:52.514 lat (usec) : 250=42.40%, 500=51.97%, 750=1.69% 00:16:52.514 lat (msec) : 50=3.94% 00:16:52.514 cpu : usr=0.78%, sys=0.78%, ctx=534, majf=0, minf=1 00:16:52.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.514 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.514 job3: (groupid=0, jobs=1): err= 0: pid=2118687: Wed May 15 12:18:20 2024 00:16:52.514 read: IOPS=20, BW=80.8KiB/s (82.8kB/s)(84.0KiB/1039msec) 00:16:52.514 slat (nsec): min=11516, max=31155, avg=24218.43, stdev=3305.64 00:16:52.514 clat (usec): min=40935, max=42024, avg=41869.85, stdev=287.93 00:16:52.514 lat (usec): min=40947, max=42049, avg=41894.07, stdev=290.17 00:16:52.514 clat percentiles (usec): 00:16:52.514 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:16:52.514 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:52.514 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:52.514 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:52.514 | 99.99th=[42206] 00:16:52.514 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:16:52.514 
slat (nsec): min=12370, max=53118, avg=13963.41, stdev=3059.31 00:16:52.514 clat (usec): min=189, max=800, avg=293.51, stdev=71.80 00:16:52.514 lat (usec): min=202, max=840, avg=307.48, stdev=72.54 00:16:52.514 clat percentiles (usec): 00:16:52.514 | 1.00th=[ 200], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 247], 00:16:52.514 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 285], 00:16:52.514 | 70.00th=[ 293], 80.00th=[ 334], 90.00th=[ 392], 95.00th=[ 486], 00:16:52.514 | 99.00th=[ 502], 99.50th=[ 506], 99.90th=[ 799], 99.95th=[ 799], 00:16:52.514 | 99.99th=[ 799] 00:16:52.514 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:16:52.514 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:52.514 lat (usec) : 250=25.33%, 500=69.23%, 750=1.31%, 1000=0.19% 00:16:52.514 lat (msec) : 50=3.94% 00:16:52.514 cpu : usr=0.67%, sys=0.77%, ctx=534, majf=0, minf=1 00:16:52.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.514 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.514 00:16:52.514 Run status group 0 (all jobs): 00:16:52.514 READ: bw=320KiB/s (327kB/s), 79.8KiB/s-83.3KiB/s (81.7kB/s-85.3kB/s), io=332KiB (340kB), run=1003-1039msec 00:16:52.514 WRITE: bw=7885KiB/s (8074kB/s), 1971KiB/s-2042KiB/s (2018kB/s-2091kB/s), io=8192KiB (8389kB), run=1003-1039msec 00:16:52.514 00:16:52.514 Disk stats (read/write): 00:16:52.514 nvme0n1: ios=52/512, merge=0/0, ticks=1726/114, in_queue=1840, util=97.09% 00:16:52.514 nvme0n2: ios=52/512, merge=0/0, ticks=1644/148, in_queue=1792, util=96.31% 00:16:52.514 nvme0n3: ios=38/512, merge=0/0, ticks=1595/134, in_queue=1729, util=100.00% 00:16:52.514 nvme0n4: ios=37/512, merge=0/0, ticks=1552/141, in_queue=1693, util=99.78% 00:16:52.514 12:18:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:52.514 [global] 00:16:52.514 thread=1 00:16:52.514 invalidate=1 00:16:52.514 rw=write 00:16:52.514 time_based=1 00:16:52.514 runtime=1 00:16:52.514 ioengine=libaio 00:16:52.514 direct=1 00:16:52.514 bs=4096 00:16:52.514 iodepth=128 00:16:52.514 norandommap=0 00:16:52.514 numjobs=1 00:16:52.514 00:16:52.514 verify_dump=1 00:16:52.514 verify_backlog=512 00:16:52.514 verify_state_save=0 00:16:52.514 do_verify=1 00:16:52.514 verify=crc32c-intel 00:16:52.515 [job0] 00:16:52.515 filename=/dev/nvme0n1 00:16:52.515 [job1] 00:16:52.515 filename=/dev/nvme0n2 00:16:52.515 [job2] 00:16:52.515 filename=/dev/nvme0n3 00:16:52.515 [job3] 00:16:52.515 filename=/dev/nvme0n4 00:16:52.515 Could not set queue depth (nvme0n1) 00:16:52.515 Could not set queue depth (nvme0n2) 00:16:52.515 Could not set queue depth (nvme0n3) 00:16:52.515 Could not set queue depth (nvme0n4) 00:16:52.772 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:52.772 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:52.772 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:52.772 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:16:52.772 fio-3.35 00:16:52.772 Starting 4 threads 00:16:54.143 00:16:54.143 job0: (groupid=0, jobs=1): err= 0: pid=2119111: Wed May 15 12:18:22 2024 00:16:54.143 read: IOPS=4035, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1015msec) 00:16:54.143 slat (nsec): min=1668, max=50399k, avg=110025.59, stdev=1006937.28 00:16:54.143 clat (usec): min=2253, max=71324, avg=14648.99, stdev=10312.85 00:16:54.143 lat (usec): min=2267, max=71351, avg=14759.02, stdev=10360.71 00:16:54.143 clat percentiles (usec): 00:16:54.143 | 1.00th=[ 5276], 5.00th=[ 7832], 10.00th=[ 8848], 20.00th=[10028], 00:16:54.143 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12256], 60.00th=[13042], 00:16:54.143 | 70.00th=[13566], 80.00th=[14222], 90.00th=[20055], 95.00th=[32637], 00:16:54.143 | 99.00th=[66847], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:16:54.143 | 99.99th=[71828] 00:16:54.143 write: IOPS=4519, BW=17.7MiB/s (18.5MB/s)(17.9MiB/1015msec); 0 zone resets 00:16:54.143 slat (usec): min=2, max=12741, avg=111.74, stdev=664.51 00:16:54.143 clat (usec): min=1428, max=66525, avg=14997.20, stdev=6694.51 00:16:54.143 lat (usec): min=1497, max=66532, avg=15108.94, stdev=6723.47 00:16:54.143 clat percentiles (usec): 00:16:54.143 | 1.00th=[ 5735], 5.00th=[ 7046], 10.00th=[ 8225], 20.00th=[ 9896], 00:16:54.143 | 30.00th=[11338], 40.00th=[12125], 50.00th=[13304], 60.00th=[14484], 00:16:54.143 | 70.00th=[15926], 80.00th=[19268], 90.00th=[23725], 95.00th=[30278], 00:16:54.143 | 99.00th=[36439], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:16:54.143 | 99.99th=[66323] 00:16:54.143 bw ( KiB/s): min=16384, max=19296, per=27.67%, avg=17840.00, stdev=2059.09, samples=2 00:16:54.143 iops : min= 4096, max= 4824, avg=4460.00, stdev=514.77, samples=2 00:16:54.143 lat (msec) : 2=0.01%, 4=0.16%, 10=20.05%, 20=65.38%, 50=12.93% 00:16:54.143 lat (msec) : 100=1.46% 00:16:54.143 cpu : usr=2.76%, sys=5.03%, ctx=546, majf=0, minf=1 00:16:54.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:54.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.143 issued rwts: total=4096,4587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.143 job1: (groupid=0, jobs=1): err= 0: pid=2119112: Wed May 15 12:18:22 2024 00:16:54.143 read: IOPS=2953, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1009msec) 00:16:54.143 slat (nsec): min=1696, max=58106k, avg=159906.85, stdev=1597666.67 00:16:54.143 clat (usec): min=3648, max=72502, avg=20472.68, stdev=13760.26 00:16:54.143 lat (usec): min=3651, max=72507, avg=20632.59, stdev=13825.74 00:16:54.143 clat percentiles (usec): 00:16:54.143 | 1.00th=[ 6915], 5.00th=[10159], 10.00th=[11469], 20.00th=[12256], 00:16:54.143 | 30.00th=[14222], 40.00th=[15664], 50.00th=[16909], 60.00th=[18220], 00:16:54.143 | 70.00th=[20055], 80.00th=[21890], 90.00th=[28181], 95.00th=[64750], 00:16:54.143 | 99.00th=[69731], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:16:54.143 | 99.99th=[72877] 00:16:54.143 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:16:54.143 slat (usec): min=2, max=15652, avg=163.49, stdev=975.41 00:16:54.144 clat (usec): min=1845, max=71557, avg=21790.11, stdev=9829.33 00:16:54.144 lat (usec): min=1860, max=71563, avg=21953.60, stdev=9865.43 00:16:54.144 clat percentiles (usec): 00:16:54.144 | 1.00th=[ 4752], 5.00th=[ 8029], 10.00th=[ 9372], 20.00th=[12518], 
00:16:54.144 | 30.00th=[15926], 40.00th=[17957], 50.00th=[20841], 60.00th=[23987], 00:16:54.144 | 70.00th=[26870], 80.00th=[30540], 90.00th=[34866], 95.00th=[39584], 00:16:54.144 | 99.00th=[45876], 99.50th=[47973], 99.90th=[71828], 99.95th=[71828], 00:16:54.144 | 99.99th=[71828] 00:16:54.144 bw ( KiB/s): min=12288, max=12288, per=19.06%, avg=12288.00, stdev= 0.00, samples=2 00:16:54.144 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:16:54.144 lat (msec) : 2=0.03%, 4=0.25%, 10=7.53%, 20=50.36%, 50=37.87% 00:16:54.144 lat (msec) : 100=3.95% 00:16:54.144 cpu : usr=2.38%, sys=3.57%, ctx=338, majf=0, minf=1 00:16:54.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:54.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.144 issued rwts: total=2980,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.144 job2: (groupid=0, jobs=1): err= 0: pid=2119115: Wed May 15 12:18:22 2024 00:16:54.144 read: IOPS=3292, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1007msec) 00:16:54.144 slat (usec): min=2, max=18204, avg=153.53, stdev=1072.25 00:16:54.144 clat (usec): min=1241, max=48730, avg=18775.89, stdev=6774.31 00:16:54.144 lat (usec): min=7636, max=48761, avg=18929.42, stdev=6846.46 00:16:54.144 clat percentiles (usec): 00:16:54.144 | 1.00th=[11076], 5.00th=[11994], 10.00th=[12780], 20.00th=[13566], 00:16:54.144 | 30.00th=[14091], 40.00th=[15008], 50.00th=[16712], 60.00th=[18744], 00:16:54.144 | 70.00th=[21103], 80.00th=[22676], 90.00th=[28705], 95.00th=[32113], 00:16:54.144 | 99.00th=[40633], 99.50th=[40633], 99.90th=[40633], 99.95th=[45876], 00:16:54.144 | 99.99th=[48497] 00:16:54.144 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:16:54.144 slat (usec): min=2, max=18712, avg=127.22, stdev=804.89 00:16:54.144 clat (usec): min=1036, max=44513, avg=18196.34, stdev=6286.10 00:16:54.144 lat (usec): min=1092, max=44542, avg=18323.56, stdev=6334.11 00:16:54.144 clat percentiles (usec): 00:16:54.144 | 1.00th=[ 4555], 5.00th=[ 9634], 10.00th=[11338], 20.00th=[13435], 00:16:54.144 | 30.00th=[14877], 40.00th=[16450], 50.00th=[17433], 60.00th=[18220], 00:16:54.144 | 70.00th=[20055], 80.00th=[22152], 90.00th=[26870], 95.00th=[27919], 00:16:54.144 | 99.00th=[36439], 99.50th=[38536], 99.90th=[40633], 99.95th=[42206], 00:16:54.144 | 99.99th=[44303] 00:16:54.144 bw ( KiB/s): min=12288, max=16384, per=22.23%, avg=14336.00, stdev=2896.31, samples=2 00:16:54.144 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:16:54.144 lat (msec) : 2=0.10%, 4=0.38%, 10=2.62%, 20=65.65%, 50=31.25% 00:16:54.144 cpu : usr=3.08%, sys=5.07%, ctx=392, majf=0, minf=1 00:16:54.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:54.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.144 issued rwts: total=3316,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.144 job3: (groupid=0, jobs=1): err= 0: pid=2119116: Wed May 15 12:18:22 2024 00:16:54.144 read: IOPS=5004, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1005msec) 00:16:54.144 slat (nsec): min=1786, max=7965.5k, avg=89888.30, stdev=565312.62 00:16:54.144 clat (usec): min=3242, max=26281, avg=12395.66, 
stdev=3219.76 00:16:54.144 lat (usec): min=4125, max=26288, avg=12485.55, stdev=3222.13 00:16:54.144 clat percentiles (usec): 00:16:54.144 | 1.00th=[ 5866], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9896], 00:16:54.144 | 30.00th=[10683], 40.00th=[11469], 50.00th=[11863], 60.00th=[12518], 00:16:54.144 | 70.00th=[13435], 80.00th=[14353], 90.00th=[16581], 95.00th=[19006], 00:16:54.144 | 99.00th=[22152], 99.50th=[24511], 99.90th=[26346], 99.95th=[26346], 00:16:54.144 | 99.99th=[26346] 00:16:54.144 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:16:54.144 slat (usec): min=2, max=19687, avg=96.12, stdev=592.96 00:16:54.144 clat (usec): min=1518, max=31745, avg=12662.58, stdev=4734.49 00:16:54.144 lat (usec): min=1536, max=31751, avg=12758.70, stdev=4742.38 00:16:54.144 clat percentiles (usec): 00:16:54.144 | 1.00th=[ 3884], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 8717], 00:16:54.144 | 30.00th=[ 9634], 40.00th=[10683], 50.00th=[12125], 60.00th=[13042], 00:16:54.144 | 70.00th=[14877], 80.00th=[16450], 90.00th=[18482], 95.00th=[21627], 00:16:54.144 | 99.00th=[26870], 99.50th=[27919], 99.90th=[28443], 99.95th=[28443], 00:16:54.144 | 99.99th=[31851] 00:16:54.144 bw ( KiB/s): min=20480, max=20480, per=31.76%, avg=20480.00, stdev= 0.00, samples=2 00:16:54.144 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:16:54.144 lat (msec) : 2=0.02%, 4=0.54%, 10=27.34%, 20=67.45%, 50=4.65% 00:16:54.144 cpu : usr=2.79%, sys=6.47%, ctx=512, majf=0, minf=1 00:16:54.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:54.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.144 issued rwts: total=5030,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.144 00:16:54.144 Run status group 0 (all jobs): 00:16:54.144 READ: bw=59.4MiB/s (62.2MB/s), 11.5MiB/s-19.5MiB/s (12.1MB/s-20.5MB/s), io=60.2MiB (63.2MB), run=1005-1015msec 00:16:54.144 WRITE: bw=63.0MiB/s (66.0MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=63.9MiB (67.0MB), run=1005-1015msec 00:16:54.144 00:16:54.144 Disk stats (read/write): 00:16:54.144 nvme0n1: ios=3122/3563, merge=0/0, ticks=22994/23794, in_queue=46788, util=86.47% 00:16:54.144 nvme0n2: ios=2576/2570, merge=0/0, ticks=42910/53289, in_queue=96199, util=87.41% 00:16:54.144 nvme0n3: ios=2775/3072, merge=0/0, ticks=26469/26761, in_queue=53230, util=90.51% 00:16:54.144 nvme0n4: ios=4153/4140, merge=0/0, ticks=39569/42077, in_queue=81646, util=93.84% 00:16:54.144 12:18:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:54.144 [global] 00:16:54.144 thread=1 00:16:54.144 invalidate=1 00:16:54.144 rw=randwrite 00:16:54.144 time_based=1 00:16:54.144 runtime=1 00:16:54.144 ioengine=libaio 00:16:54.144 direct=1 00:16:54.144 bs=4096 00:16:54.144 iodepth=128 00:16:54.144 norandommap=0 00:16:54.144 numjobs=1 00:16:54.144 00:16:54.144 verify_dump=1 00:16:54.144 verify_backlog=512 00:16:54.144 verify_state_save=0 00:16:54.144 do_verify=1 00:16:54.144 verify=crc32c-intel 00:16:54.144 [job0] 00:16:54.144 filename=/dev/nvme0n1 00:16:54.144 [job1] 00:16:54.144 filename=/dev/nvme0n2 00:16:54.144 [job2] 00:16:54.144 filename=/dev/nvme0n3 00:16:54.144 [job3] 00:16:54.144 filename=/dev/nvme0n4 00:16:54.144 Could not set queue depth 
(nvme0n1) 00:16:54.144 Could not set queue depth (nvme0n2) 00:16:54.144 Could not set queue depth (nvme0n3) 00:16:54.144 Could not set queue depth (nvme0n4) 00:16:54.402 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.402 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.402 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.402 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.402 fio-3.35 00:16:54.402 Starting 4 threads 00:16:55.775 00:16:55.775 job0: (groupid=0, jobs=1): err= 0: pid=2119535: Wed May 15 12:18:24 2024 00:16:55.775 read: IOPS=4530, BW=17.7MiB/s (18.6MB/s)(18.0MiB/1017msec) 00:16:55.775 slat (nsec): min=1690, max=14484k, avg=100307.38, stdev=693977.33 00:16:55.775 clat (usec): min=1437, max=54098, avg=13887.76, stdev=6105.68 00:16:55.775 lat (usec): min=1461, max=54104, avg=13988.06, stdev=6128.23 00:16:55.775 clat percentiles (usec): 00:16:55.775 | 1.00th=[ 5669], 5.00th=[ 7504], 10.00th=[ 8225], 20.00th=[ 9634], 00:16:55.775 | 30.00th=[10552], 40.00th=[11207], 50.00th=[12518], 60.00th=[13304], 00:16:55.775 | 70.00th=[15008], 80.00th=[18482], 90.00th=[21627], 95.00th=[22938], 00:16:55.775 | 99.00th=[34341], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:16:55.775 | 99.99th=[54264] 00:16:55.775 write: IOPS=4673, BW=18.3MiB/s (19.1MB/s)(18.6MiB/1017msec); 0 zone resets 00:16:55.775 slat (usec): min=2, max=12382, avg=101.00, stdev=567.34 00:16:55.775 clat (usec): min=1476, max=35786, avg=13599.37, stdev=5176.08 00:16:55.775 lat (usec): min=1480, max=35796, avg=13700.37, stdev=5193.80 00:16:55.775 clat percentiles (usec): 00:16:55.775 | 1.00th=[ 3130], 5.00th=[ 5669], 10.00th=[ 7570], 20.00th=[ 9765], 00:16:55.775 | 30.00th=[10683], 40.00th=[11600], 50.00th=[12649], 60.00th=[14353], 00:16:55.775 | 70.00th=[16188], 80.00th=[17695], 90.00th=[20055], 95.00th=[22152], 00:16:55.775 | 99.00th=[29492], 99.50th=[29754], 99.90th=[35390], 99.95th=[35390], 00:16:55.775 | 99.99th=[35914] 00:16:55.775 bw ( KiB/s): min=16792, max=20480, per=27.53%, avg=18636.00, stdev=2607.81, samples=2 00:16:55.775 iops : min= 4198, max= 5120, avg=4659.00, stdev=651.95, samples=2 00:16:55.775 lat (msec) : 2=0.18%, 4=0.82%, 10=23.31%, 20=63.66%, 50=11.67% 00:16:55.775 lat (msec) : 100=0.36% 00:16:55.775 cpu : usr=3.54%, sys=5.02%, ctx=605, majf=0, minf=1 00:16:55.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:55.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.775 issued rwts: total=4608,4753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.775 job1: (groupid=0, jobs=1): err= 0: pid=2119536: Wed May 15 12:18:24 2024 00:16:55.775 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:16:55.775 slat (nsec): min=1749, max=9572.7k, avg=89205.13, stdev=634326.91 00:16:55.775 clat (usec): min=2192, max=28938, avg=12771.76, stdev=3864.95 00:16:55.775 lat (usec): min=2379, max=28944, avg=12860.97, stdev=3890.46 00:16:55.775 clat percentiles (usec): 00:16:55.775 | 1.00th=[ 4621], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[ 9765], 00:16:55.775 | 30.00th=[10421], 40.00th=[11338], 50.00th=[11863], 
60.00th=[13042], 00:16:55.775 | 70.00th=[14091], 80.00th=[15795], 90.00th=[17957], 95.00th=[19530], 00:16:55.775 | 99.00th=[26084], 99.50th=[26084], 99.90th=[28967], 99.95th=[28967], 00:16:55.775 | 99.99th=[28967] 00:16:55.775 write: IOPS=4868, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1006msec); 0 zone resets 00:16:55.775 slat (usec): min=2, max=11212, avg=96.95, stdev=550.15 00:16:55.775 clat (usec): min=1059, max=28973, avg=14028.77, stdev=5299.94 00:16:55.775 lat (usec): min=1565, max=28998, avg=14125.72, stdev=5321.78 00:16:55.775 clat percentiles (usec): 00:16:55.775 | 1.00th=[ 3490], 5.00th=[ 6128], 10.00th=[ 7439], 20.00th=[ 8979], 00:16:55.775 | 30.00th=[10814], 40.00th=[12387], 50.00th=[13566], 60.00th=[15401], 00:16:55.775 | 70.00th=[16909], 80.00th=[18482], 90.00th=[21365], 95.00th=[22938], 00:16:55.775 | 99.00th=[27395], 99.50th=[27657], 99.90th=[28967], 99.95th=[28967], 00:16:55.775 | 99.99th=[28967] 00:16:55.775 bw ( KiB/s): min=18424, max=19736, per=28.19%, avg=19080.00, stdev=927.72, samples=2 00:16:55.775 iops : min= 4606, max= 4934, avg=4770.00, stdev=231.93, samples=2 00:16:55.775 lat (msec) : 2=0.09%, 4=0.86%, 10=23.96%, 20=65.09%, 50=9.99% 00:16:55.775 cpu : usr=2.79%, sys=6.57%, ctx=661, majf=0, minf=1 00:16:55.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:55.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.775 issued rwts: total=4608,4898,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.775 job2: (groupid=0, jobs=1): err= 0: pid=2119537: Wed May 15 12:18:24 2024 00:16:55.775 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:16:55.775 slat (usec): min=2, max=33605, avg=135.76, stdev=1005.61 00:16:55.775 clat (usec): min=5424, max=44014, avg=17522.20, stdev=6957.39 00:16:55.775 lat (usec): min=5428, max=44018, avg=17657.96, stdev=6989.82 00:16:55.775 clat percentiles (usec): 00:16:55.775 | 1.00th=[ 5604], 5.00th=[10290], 10.00th=[10814], 20.00th=[11863], 00:16:55.775 | 30.00th=[12649], 40.00th=[13829], 50.00th=[16057], 60.00th=[18744], 00:16:55.775 | 70.00th=[20055], 80.00th=[22152], 90.00th=[25560], 95.00th=[33162], 00:16:55.775 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:16:55.775 | 99.99th=[43779] 00:16:55.775 write: IOPS=3590, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1006msec); 0 zone resets 00:16:55.775 slat (usec): min=3, max=17574, avg=134.30, stdev=830.16 00:16:55.775 clat (usec): min=1437, max=59271, avg=17806.12, stdev=6302.51 00:16:55.775 lat (usec): min=1448, max=59280, avg=17940.42, stdev=6347.83 00:16:55.775 clat percentiles (usec): 00:16:55.775 | 1.00th=[ 4883], 5.00th=[10814], 10.00th=[12256], 20.00th=[13566], 00:16:55.775 | 30.00th=[14484], 40.00th=[15795], 50.00th=[16450], 60.00th=[17433], 00:16:55.775 | 70.00th=[19268], 80.00th=[21890], 90.00th=[25560], 95.00th=[28705], 00:16:55.775 | 99.00th=[45876], 99.50th=[46400], 99.90th=[46400], 99.95th=[52691], 00:16:55.775 | 99.99th=[59507] 00:16:55.775 bw ( KiB/s): min=12288, max=16384, per=21.18%, avg=14336.00, stdev=2896.31, samples=2 00:16:55.775 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:16:55.775 lat (msec) : 2=0.06%, 4=0.10%, 10=3.79%, 20=68.25%, 50=27.78% 00:16:55.775 lat (msec) : 100=0.03% 00:16:55.775 cpu : usr=1.89%, sys=3.98%, ctx=487, majf=0, minf=1 00:16:55.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.1% 00:16:55.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.775 issued rwts: total=3584,3612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.775 job3: (groupid=0, jobs=1): err= 0: pid=2119538: Wed May 15 12:18:24 2024 00:16:55.775 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:16:55.775 slat (usec): min=2, max=19148, avg=123.75, stdev=820.44 00:16:55.775 clat (usec): min=8401, max=61168, avg=16971.43, stdev=7935.60 00:16:55.775 lat (usec): min=8413, max=61194, avg=17095.19, stdev=8006.01 00:16:55.775 clat percentiles (usec): 00:16:55.775 | 1.00th=[ 9503], 5.00th=[10552], 10.00th=[11076], 20.00th=[12125], 00:16:55.775 | 30.00th=[12649], 40.00th=[13566], 50.00th=[14615], 60.00th=[15139], 00:16:55.775 | 70.00th=[17433], 80.00th=[19530], 90.00th=[25560], 95.00th=[35914], 00:16:55.775 | 99.00th=[46924], 99.50th=[58459], 99.90th=[58983], 99.95th=[58983], 00:16:55.775 | 99.99th=[61080] 00:16:55.775 write: IOPS=3922, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1006msec); 0 zone resets 00:16:55.775 slat (usec): min=2, max=11218, avg=132.16, stdev=633.63 00:16:55.775 clat (usec): min=4959, max=31968, avg=16524.15, stdev=4621.05 00:16:55.775 lat (usec): min=5532, max=32028, avg=16656.31, stdev=4652.34 00:16:55.775 clat percentiles (usec): 00:16:55.775 | 1.00th=[ 7701], 5.00th=[10421], 10.00th=[11469], 20.00th=[12649], 00:16:55.775 | 30.00th=[13698], 40.00th=[14353], 50.00th=[15664], 60.00th=[16909], 00:16:55.775 | 70.00th=[18744], 80.00th=[20579], 90.00th=[22676], 95.00th=[25035], 00:16:55.775 | 99.00th=[29754], 99.50th=[30278], 99.90th=[30540], 99.95th=[31851], 00:16:55.775 | 99.99th=[31851] 00:16:55.775 bw ( KiB/s): min=12288, max=18264, per=22.57%, avg=15276.00, stdev=4225.67, samples=2 00:16:55.776 iops : min= 3072, max= 4566, avg=3819.00, stdev=1056.42, samples=2 00:16:55.776 lat (msec) : 10=2.10%, 20=77.28%, 50=20.19%, 100=0.44% 00:16:55.776 cpu : usr=2.99%, sys=5.37%, ctx=518, majf=0, minf=1 00:16:55.776 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:55.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.776 issued rwts: total=3584,3946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.776 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.776 00:16:55.776 Run status group 0 (all jobs): 00:16:55.776 READ: bw=62.9MiB/s (66.0MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=64.0MiB (67.1MB), run=1006-1017msec 00:16:55.776 WRITE: bw=66.1MiB/s (69.3MB/s), 14.0MiB/s-19.0MiB/s (14.7MB/s-19.9MB/s), io=67.2MiB (70.5MB), run=1006-1017msec 00:16:55.776 00:16:55.776 Disk stats (read/write): 00:16:55.776 nvme0n1: ios=3760/4096, merge=0/0, ticks=27098/32458, in_queue=59556, util=85.77% 00:16:55.776 nvme0n2: ios=3634/4096, merge=0/0, ticks=47165/51392, in_queue=98557, util=87.31% 00:16:55.776 nvme0n3: ios=2706/3072, merge=0/0, ticks=24077/28777, in_queue=52854, util=91.38% 00:16:55.776 nvme0n4: ios=3026/3072, merge=0/0, ticks=21991/18293, in_queue=40284, util=91.49% 00:16:55.776 12:18:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:55.776 12:18:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2119657 00:16:55.776 12:18:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:55.776 12:18:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:55.776 [global] 00:16:55.776 thread=1 00:16:55.776 invalidate=1 00:16:55.776 rw=read 00:16:55.776 time_based=1 00:16:55.776 runtime=10 00:16:55.776 ioengine=libaio 00:16:55.776 direct=1 00:16:55.776 bs=4096 00:16:55.776 iodepth=1 00:16:55.776 norandommap=1 00:16:55.776 numjobs=1 00:16:55.776 00:16:55.776 [job0] 00:16:55.776 filename=/dev/nvme0n1 00:16:55.776 [job1] 00:16:55.776 filename=/dev/nvme0n2 00:16:55.776 [job2] 00:16:55.776 filename=/dev/nvme0n3 00:16:55.776 [job3] 00:16:55.776 filename=/dev/nvme0n4 00:16:55.776 Could not set queue depth (nvme0n1) 00:16:55.776 Could not set queue depth (nvme0n2) 00:16:55.776 Could not set queue depth (nvme0n3) 00:16:55.776 Could not set queue depth (nvme0n4) 00:16:56.033 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.033 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.033 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.033 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.033 fio-3.35 00:16:56.033 Starting 4 threads 00:16:58.595 12:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:58.854 12:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:58.854 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=262144, buflen=4096 00:16:58.854 fio: pid=2119963, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:59.112 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=20434944, buflen=4096 00:16:59.112 fio: pid=2119962, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:59.112 12:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:59.112 12:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:59.370 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=18432000, buflen=4096 00:16:59.370 fio: pid=2119960, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:59.370 12:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:59.370 12:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:59.370 12:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:59.370 12:18:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:59.370 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=327680, buflen=4096 00:16:59.370 fio: pid=2119961, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:16:59.628 00:16:59.628 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u 
error, error=Remote I/O error): pid=2119960: Wed May 15 12:18:27 2024 00:16:59.628 read: IOPS=1490, BW=5960KiB/s (6103kB/s)(17.6MiB/3020msec) 00:16:59.628 slat (usec): min=8, max=20560, avg=14.22, stdev=306.33 00:16:59.628 clat (usec): min=470, max=42975, avg=650.21, stdev=1949.10 00:16:59.628 lat (usec): min=480, max=63014, avg=664.42, stdev=2067.94 00:16:59.628 clat percentiles (usec): 00:16:59.628 | 1.00th=[ 506], 5.00th=[ 523], 10.00th=[ 529], 20.00th=[ 537], 00:16:59.628 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 545], 60.00th=[ 553], 00:16:59.628 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 578], 95.00th=[ 635], 00:16:59.628 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[42206], 99.95th=[42206], 00:16:59.628 | 99.99th=[42730] 00:16:59.628 bw ( KiB/s): min= 6456, max= 7200, per=57.70%, avg=6922.00, stdev=291.53, samples=5 00:16:59.628 iops : min= 1614, max= 1800, avg=1730.40, stdev=72.92, samples=5 00:16:59.628 lat (usec) : 500=0.64%, 750=97.00%, 1000=2.02% 00:16:59.628 lat (msec) : 2=0.09%, 50=0.22% 00:16:59.628 cpu : usr=1.09%, sys=2.55%, ctx=4504, majf=0, minf=1 00:16:59.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.628 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.628 issued rwts: total=4501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.628 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2119961: Wed May 15 12:18:27 2024 00:16:59.628 read: IOPS=25, BW=99.6KiB/s (102kB/s)(320KiB/3212msec) 00:16:59.628 slat (usec): min=10, max=8558, avg=207.69, stdev=1199.21 00:16:59.628 clat (usec): min=978, max=42985, avg=39922.43, stdev=8981.62 00:16:59.628 lat (usec): min=1002, max=49972, avg=40048.63, stdev=9048.10 00:16:59.628 clat percentiles (usec): 00:16:59.628 | 1.00th=[ 979], 5.00th=[ 1139], 10.00th=[41157], 20.00th=[41681], 00:16:59.628 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:59.628 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:59.628 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:59.628 | 99.99th=[42730] 00:16:59.628 bw ( KiB/s): min= 91, max= 112, per=0.83%, avg=99.17, stdev= 7.55, samples=6 00:16:59.628 iops : min= 22, max= 28, avg=24.67, stdev= 2.07, samples=6 00:16:59.628 lat (usec) : 1000=2.47% 00:16:59.628 lat (msec) : 2=2.47%, 50=93.83% 00:16:59.628 cpu : usr=0.00%, sys=0.25%, ctx=84, majf=0, minf=1 00:16:59.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.628 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.628 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.628 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2119962: Wed May 15 12:18:27 2024 00:16:59.628 read: IOPS=1772, BW=7087KiB/s (7257kB/s)(19.5MiB/2816msec) 00:16:59.628 slat (nsec): min=8701, max=43237, avg=9600.88, stdev=1746.98 00:16:59.628 clat (usec): min=418, max=1404, avg=548.76, stdev=48.91 00:16:59.628 lat (usec): min=428, max=1413, avg=558.36, stdev=48.98 00:16:59.628 clat percentiles (usec): 00:16:59.628 | 1.00th=[ 461], 5.00th=[ 490], 10.00th=[ 515], 
20.00th=[ 529], 00:16:59.628 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 545], 60.00th=[ 553], 00:16:59.628 | 70.00th=[ 553], 80.00th=[ 562], 90.00th=[ 570], 95.00th=[ 578], 00:16:59.628 | 99.00th=[ 807], 99.50th=[ 889], 99.90th=[ 979], 99.95th=[ 1012], 00:16:59.628 | 99.99th=[ 1401] 00:16:59.628 bw ( KiB/s): min= 6888, max= 7288, per=59.14%, avg=7094.40, stdev=187.65, samples=5 00:16:59.628 iops : min= 1722, max= 1822, avg=1773.60, stdev=46.91, samples=5 00:16:59.628 lat (usec) : 500=6.69%, 750=91.82%, 1000=1.40% 00:16:59.628 lat (msec) : 2=0.06% 00:16:59.628 cpu : usr=1.53%, sys=2.81%, ctx=4990, majf=0, minf=1 00:16:59.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.628 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.628 issued rwts: total=4990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.628 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2119963: Wed May 15 12:18:27 2024 00:16:59.628 read: IOPS=24, BW=96.0KiB/s (98.3kB/s)(256KiB/2668msec) 00:16:59.628 slat (nsec): min=9676, max=32215, avg=20931.38, stdev=6051.43 00:16:59.628 clat (usec): min=1121, max=43009, avg=41259.07, stdev=5111.86 00:16:59.628 lat (usec): min=1153, max=43034, avg=41280.08, stdev=5110.54 00:16:59.628 clat percentiles (usec): 00:16:59.628 | 1.00th=[ 1123], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:16:59.628 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:59.628 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:59.628 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:59.628 | 99.99th=[43254] 00:16:59.628 bw ( KiB/s): min= 96, max= 96, per=0.80%, avg=96.00, stdev= 0.00, samples=5 00:16:59.628 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:16:59.628 lat (msec) : 2=1.54%, 50=96.92% 00:16:59.628 cpu : usr=0.00%, sys=0.07%, ctx=65, majf=0, minf=2 00:16:59.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.628 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.628 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.628 00:16:59.628 Run status group 0 (all jobs): 00:16:59.628 READ: bw=11.7MiB/s (12.3MB/s), 96.0KiB/s-7087KiB/s (98.3kB/s-7257kB/s), io=37.6MiB (39.5MB), run=2668-3212msec 00:16:59.628 00:16:59.628 Disk stats (read/write): 00:16:59.628 nvme0n1: ios=4496/0, merge=0/0, ticks=2709/0, in_queue=2709, util=93.82% 00:16:59.628 nvme0n2: ios=76/0, merge=0/0, ticks=3029/0, in_queue=3029, util=95.30% 00:16:59.628 nvme0n3: ios=4567/0, merge=0/0, ticks=2474/0, in_queue=2474, util=95.97% 00:16:59.628 nvme0n4: ios=62/0, merge=0/0, ticks=2558/0, in_queue=2558, util=96.40% 00:16:59.628 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:59.628 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:59.886 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:16:59.886 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:00.143 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:00.143 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:00.143 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:00.143 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:00.401 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:00.401 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2119657 00:17:00.401 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:00.401 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.401 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.401 12:18:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # local i=0 00:17:00.401 12:18:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:17:00.401 12:18:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.659 12:18:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:17:00.659 12:18:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.659 12:18:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1228 -- # return 0 00:17:00.659 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:00.659 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:00.659 nvmf hotplug test: fio failed as expected 00:17:00.659 12:18:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.659 12:18:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:00.659 12:18:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:00.659 12:18:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:00.659 12:18:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:00.659 12:18:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:00.659 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:00.659 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:00.659 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.659 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:00.659 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.659 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.659 rmmod nvme_tcp 00:17:00.659 rmmod nvme_fabrics 
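The teardown traced here reduces to a short sequence: delete the remaining malloc bdevs over RPC, disconnect the initiator, drop the subsystem, then unload the kernel NVMe/TCP modules. A condensed sketch of those steps, assuming the rpc.py path, NQN and serial shown in the log (a paraphrase of the traced fio.sh/common.sh logic, not the literal scripts):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for m in Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete $m                    # remove the malloc bdevs backing the namespaces
done
nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # detach the initiator side
lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || echo "namespace gone"
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job*-verify.state                   # discard fio verify state files
modprobe -v -r nvme-tcp                           # nvmftestfini: unload transport modules
modprobe -v -r nvme-fabrics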
00:17:00.917 rmmod nvme_keyring 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2116712 ']' 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2116712 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' -z 2116712 ']' 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # kill -0 2116712 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # uname 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2116712 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2116712' 00:17:00.917 killing process with pid 2116712 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # kill 2116712 00:17:00.917 [2024-05-15 12:18:29.277147] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:00.917 12:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@971 -- # wait 2116712 00:17:01.176 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.176 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.176 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.176 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.176 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.176 12:18:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.176 12:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.176 12:18:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.078 12:18:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:03.078 00:17:03.078 real 0m28.245s 00:17:03.078 user 2m2.655s 00:17:03.078 sys 0m9.917s 00:17:03.078 12:18:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:03.078 12:18:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.078 ************************************ 00:17:03.078 END TEST nvmf_fio_target 00:17:03.078 ************************************ 00:17:03.078 12:18:31 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:03.078 12:18:31 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:03.078 12:18:31 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:03.078 12:18:31 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.336 ************************************ 00:17:03.336 START TEST nvmf_bdevio 00:17:03.336 ************************************ 00:17:03.336 12:18:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:03.336 * Looking for test storage... 00:17:03.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.336 12:18:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.336 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:03.336 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.336 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.336 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.336 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.336 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.336 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:03.337 12:18:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:09.895 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:09.895 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:09.895 Found net devices under 0000:af:00.0: cvl_0_0 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:09.895 
Found net devices under 0000:af:00.1: cvl_0_1 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:09.895 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:17:10.154 00:17:10.154 --- 10.0.0.2 ping statistics --- 00:17:10.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.154 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:10.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:17:10.154 00:17:10.154 --- 10.0.0.1 ping statistics --- 00:17:10.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.154 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2124484 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2124484 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # '[' -z 2124484 ']' 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:10.154 12:18:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:10.154 [2024-05-15 12:18:38.616208] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:17:10.154 [2024-05-15 12:18:38.616254] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.154 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.412 [2024-05-15 12:18:38.689566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.412 [2024-05-15 12:18:38.761962] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.412 [2024-05-15 12:18:38.761995] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
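The nvmf_tcp_init trace above boils down to the following bring-up, collected here in order for reference: one e810 port (cvl_0_0) is moved into a private network namespace to play the target, the other (cvl_0_1) stays in the root namespace as the initiator. The interface names and 10.0.0.x addresses are simply the values this host reported; on another machine they would differ.

# move the target-side port into its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# bring both ports and the namespace loopback up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1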
00:17:10.412 [2024-05-15 12:18:38.762004] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.412 [2024-05-15 12:18:38.762012] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.412 [2024-05-15 12:18:38.762018] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.412 [2024-05-15 12:18:38.762134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:10.412 [2024-05-15 12:18:38.762262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:10.412 [2024-05-15 12:18:38.762372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.412 [2024-05-15 12:18:38.762372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@861 -- # return 0 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:10.977 [2024-05-15 12:18:39.456017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:10.977 Malloc0 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:10.977 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
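bdevio.sh then configures the target entirely over its RPC socket: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2 port 4420. The rpc_cmd calls traced above are a thin wrapper around scripts/rpc.py; issued by hand, the same sequence would look like the sketch below, with the flags copied verbatim from the trace.

# create the TCP transport with the options used by the test
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks, named Malloc0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# subsystem allowing any host (-a) with a fixed serial number
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# export Malloc0 through the subsystem and start listening on the target address
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420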
00:17:11.234 [2024-05-15 12:18:39.510332] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:11.234 [2024-05-15 12:18:39.510574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:11.234 { 00:17:11.234 "params": { 00:17:11.234 "name": "Nvme$subsystem", 00:17:11.234 "trtype": "$TEST_TRANSPORT", 00:17:11.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.234 "adrfam": "ipv4", 00:17:11.234 "trsvcid": "$NVMF_PORT", 00:17:11.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.234 "hdgst": ${hdgst:-false}, 00:17:11.234 "ddgst": ${ddgst:-false} 00:17:11.234 }, 00:17:11.234 "method": "bdev_nvme_attach_controller" 00:17:11.234 } 00:17:11.234 EOF 00:17:11.234 )") 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:11.234 12:18:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:11.234 "params": { 00:17:11.234 "name": "Nvme1", 00:17:11.234 "trtype": "tcp", 00:17:11.234 "traddr": "10.0.0.2", 00:17:11.234 "adrfam": "ipv4", 00:17:11.234 "trsvcid": "4420", 00:17:11.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.234 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.234 "hdgst": false, 00:17:11.234 "ddgst": false 00:17:11.234 }, 00:17:11.234 "method": "bdev_nvme_attach_controller" 00:17:11.234 }' 00:17:11.234 [2024-05-15 12:18:39.562116] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
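The initiator side is not configured over live RPCs: bdevio reads the JSON generated above straight from --json /dev/fd/62. For interactive debugging, attaching the same controller to a running SPDK application would look roughly like the sketch below; the values mirror the generated config, but the option spelling is an assumption and should be checked against scripts/rpc.py bdev_nvme_attach_controller --help on the tree in use.

# hedged sketch: interactive equivalent of the generated bdev_nvme_attach_controller JSON
scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1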
00:17:11.234 [2024-05-15 12:18:39.562163] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124524 ] 00:17:11.234 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.234 [2024-05-15 12:18:39.633225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:11.234 [2024-05-15 12:18:39.705630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.234 [2024-05-15 12:18:39.705725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.234 [2024-05-15 12:18:39.705728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.492 I/O targets: 00:17:11.492 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:11.492 00:17:11.492 00:17:11.492 CUnit - A unit testing framework for C - Version 2.1-3 00:17:11.492 http://cunit.sourceforge.net/ 00:17:11.492 00:17:11.492 00:17:11.492 Suite: bdevio tests on: Nvme1n1 00:17:11.492 Test: blockdev write read block ...passed 00:17:11.492 Test: blockdev write zeroes read block ...passed 00:17:11.492 Test: blockdev write zeroes read no split ...passed 00:17:11.751 Test: blockdev write zeroes read split ...passed 00:17:11.752 Test: blockdev write zeroes read split partial ...passed 00:17:11.752 Test: blockdev reset ...[2024-05-15 12:18:40.137277] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:11.752 [2024-05-15 12:18:40.137345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18837b0 (9): Bad file descriptor 00:17:11.752 [2024-05-15 12:18:40.148986] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:11.752 passed 00:17:11.752 Test: blockdev write read 8 blocks ...passed 00:17:11.752 Test: blockdev write read size > 128k ...passed 00:17:11.752 Test: blockdev write read invalid size ...passed 00:17:11.752 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:11.752 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:11.752 Test: blockdev write read max offset ...passed 00:17:12.011 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:12.011 Test: blockdev writev readv 8 blocks ...passed 00:17:12.011 Test: blockdev writev readv 30 x 1block ...passed 00:17:12.011 Test: blockdev writev readv block ...passed 00:17:12.011 Test: blockdev writev readv size > 128k ...passed 00:17:12.011 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:12.011 Test: blockdev comparev and writev ...[2024-05-15 12:18:40.337292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.011 [2024-05-15 12:18:40.337322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.011 [2024-05-15 12:18:40.337339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.011 [2024-05-15 12:18:40.337355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.011 [2024-05-15 12:18:40.337799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.011 [2024-05-15 12:18:40.337812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:12.011 [2024-05-15 12:18:40.337826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.011 [2024-05-15 12:18:40.337837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:12.011 [2024-05-15 12:18:40.338268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.011 [2024-05-15 12:18:40.338282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:12.011 [2024-05-15 12:18:40.338297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.011 [2024-05-15 12:18:40.338307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:12.011 [2024-05-15 12:18:40.338761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.011 [2024-05-15 12:18:40.338775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:12.011 [2024-05-15 12:18:40.338789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:12.011 [2024-05-15 12:18:40.338799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:12.011 passed 00:17:12.011 Test: blockdev nvme passthru rw ...passed 00:17:12.011 Test: blockdev nvme passthru vendor specific ...[2024-05-15 12:18:40.421971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:12.011 [2024-05-15 12:18:40.421987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:12.011 [2024-05-15 12:18:40.422301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:12.011 [2024-05-15 12:18:40.422313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:12.011 [2024-05-15 12:18:40.422624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:12.012 [2024-05-15 12:18:40.422636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:12.012 [2024-05-15 12:18:40.422950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:12.012 [2024-05-15 12:18:40.422963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:12.012 passed 00:17:12.012 Test: blockdev nvme admin passthru ...passed 00:17:12.012 Test: blockdev copy ...passed 00:17:12.012 00:17:12.012 Run Summary: Type Total Ran Passed Failed Inactive 00:17:12.012 suites 1 1 n/a 0 0 00:17:12.012 tests 23 23 23 0 0 00:17:12.012 asserts 152 152 152 0 n/a 00:17:12.012 00:17:12.012 Elapsed time = 1.161 seconds 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.271 rmmod nvme_tcp 00:17:12.271 rmmod nvme_fabrics 00:17:12.271 rmmod nvme_keyring 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2124484 ']' 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2124484 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' -z 
2124484 ']' 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # kill -0 2124484 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # uname 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2124484 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2124484' 00:17:12.271 killing process with pid 2124484 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # kill 2124484 00:17:12.271 [2024-05-15 12:18:40.781823] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:12.271 12:18:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@971 -- # wait 2124484 00:17:12.531 12:18:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:12.531 12:18:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:12.531 12:18:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:12.531 12:18:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.531 12:18:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:12.531 12:18:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.531 12:18:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.531 12:18:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.068 12:18:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:15.068 00:17:15.068 real 0m11.469s 00:17:15.068 user 0m12.688s 00:17:15.068 sys 0m5.917s 00:17:15.068 12:18:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:15.068 12:18:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:15.068 ************************************ 00:17:15.068 END TEST nvmf_bdevio 00:17:15.068 ************************************ 00:17:15.068 12:18:43 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:15.068 12:18:43 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:15.068 12:18:43 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:15.068 12:18:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:15.068 ************************************ 00:17:15.068 START TEST nvmf_auth_target 00:17:15.068 ************************************ 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:15.068 * Looking for test storage... 
00:17:15.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:17:15.068 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.069 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.069 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.069 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.069 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.069 12:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.069 12:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.069 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:15.069 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:15.069 12:18:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.069 12:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:21.694 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:21.694 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:21.694 Found net devices under 
0000:af:00.0: cvl_0_0 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:21.694 Found net devices under 0000:af:00.1: cvl_0_1 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.694 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.695 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:21.695 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:21.695 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.695 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.695 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.695 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.695 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:21.695 12:18:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:21.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:17:21.695 00:17:21.695 --- 10.0.0.2 ping statistics --- 00:17:21.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.695 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:21.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:17:21.695 00:17:21.695 --- 10.0.0.1 ping statistics --- 00:17:21.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.695 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2128475 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2128475 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2128475 ']' 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
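nvmf_auth_target exercises NVMe in-band authentication (DH-HMAC-CHAP) between two SPDK applications: the nvmf_tgt just started inside the namespace (traced with -L nvmf_auth) acts as the target, and, as the next lines show, a second spdk_tgt is launched with its own core mask and an RPC socket at /var/tmp/host.sock (-L nvme_auth) to act as the host. A sketch of that two-process layout follows; the binaries and sockets are as recorded in the trace, while the rpc_get_methods calls are only an illustrative liveness check, not part of the test.

# target side: runs in the test namespace, RPC on the default /var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
# host side: separate core mask, separate RPC socket
./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
# later RPCs pick a side via -s; without it they reach the target's default socket
scripts/rpc.py rpc_get_methods                          # target
scripts/rpc.py -s /var/tmp/host.sock rpc_get_methods    # host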
00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:21.695 12:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=2128751 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=38870e4dba70516d27af10c1d515c092df238fcb94a2c561 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.V9i 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 38870e4dba70516d27af10c1d515c092df238fcb94a2c561 0 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 38870e4dba70516d27af10c1d515c092df238fcb94a2c561 0 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=38870e4dba70516d27af10c1d515c092df238fcb94a2c561 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.V9i 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.V9i 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.V9i 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b54640880cc2e8b5522150671f8378bc 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GFx 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b54640880cc2e8b5522150671f8378bc 1 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b54640880cc2e8b5522150671f8378bc 1 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b54640880cc2e8b5522150671f8378bc 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:22.633 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GFx 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GFx 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.GFx 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=443c2294d3b4c7f3f58bc395fb5e56042c737eeae95f7ad9 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Fvp 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 443c2294d3b4c7f3f58bc395fb5e56042c737eeae95f7ad9 2 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 443c2294d3b4c7f3f58bc395fb5e56042c737eeae95f7ad9 2 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=443c2294d3b4c7f3f58bc395fb5e56042c737eeae95f7ad9 00:17:22.893 
12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Fvp 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Fvp 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.Fvp 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=82bc4e2da1827e9a605b6a068ab46992b122911806e7c63abe13e8d2a8a6673c 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Vu0 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 82bc4e2da1827e9a605b6a068ab46992b122911806e7c63abe13e8d2a8a6673c 3 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 82bc4e2da1827e9a605b6a068ab46992b122911806e7c63abe13e8d2a8a6673c 3 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=82bc4e2da1827e9a605b6a068ab46992b122911806e7c63abe13e8d2a8a6673c 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Vu0 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Vu0 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.Vu0 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 2128475 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2128475 ']' 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
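The gen_dhchap_key / format_dhchap_key steps traced above reduce to a few shell operations: read random bytes with xxd, pick a digest index (the secrets later in this log use 00 for null, 01 for sha256, 02 for sha384, 03 for sha512), wrap the material in a DHHC-1 envelope via a short python helper, and store it 0600 in a mktemp file. The standalone sketch below approximates that; it assumes the conventional DH-HMAC-CHAP secret layout (base64 over the key bytes followed by their CRC32), since the actual python snippet behind nvmf/common.sh@705 is not visible in this trace, so treat the CRC handling as an assumption rather than the library's exact code.

#!/usr/bin/env bash
# Hedged sketch of generating a DHHC-1 secret like the ones logged above.
hexkey=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex chars, as in "gen_dhchap_key null 48"
digest_id=00                              # 00=null, 01=sha256, 02=sha384, 03=sha512 (matches the DHHC-1:0N: prefixes below)

# Assumption: the secret body is base64(key bytes || CRC32 of the key bytes, little-endian).
secret=$(python3 - "$hexkey" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                     # the hex string itself is used as the key material
crc = zlib.crc32(key).to_bytes(4, "little")    # assumed byte order; not shown in this trace
print(base64.b64encode(key + crc).decode())
PYEOF
)

keyfile=$(mktemp -t spdk.key-null.XXX)
echo "DHHC-1:${digest_id}:${secret}:" > "$keyfile"
chmod 0600 "$keyfile"
echo "$keyfile"

The resulting file is what the trace then registers on both sides with keyring_file_add_key key0..key3, once against the default target socket and once against /var/tmp/host.sock.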
00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:22.893 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 2128751 /var/tmp/host.sock 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2128751 ']' 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/host.sock 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:23.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.V9i 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.152 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.411 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.411 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.V9i 00:17:23.411 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.V9i 00:17:23.411 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:23.411 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.GFx 00:17:23.411 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.411 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.411 12:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.411 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.GFx 00:17:23.411 12:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.GFx 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Fvp 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Fvp 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Fvp 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Vu0 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Vu0 00:17:23.671 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Vu0 00:17:23.930 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:17:23.931 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.931 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:23.931 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:23.931 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:24.190 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:17:24.190 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:24.190 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.190 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:24.190 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:24.190 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:24.190 12:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.190 12:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.190 12:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.190 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:24.190 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:24.450 00:17:24.450 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:24.450 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:24.450 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.450 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.450 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.450 12:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.450 12:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.450 12:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.450 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:24.450 { 00:17:24.450 "cntlid": 1, 00:17:24.450 "qid": 0, 00:17:24.450 "state": "enabled", 00:17:24.450 "listen_address": { 00:17:24.450 "trtype": "TCP", 00:17:24.450 "adrfam": "IPv4", 00:17:24.450 "traddr": "10.0.0.2", 00:17:24.450 "trsvcid": "4420" 00:17:24.450 }, 00:17:24.450 "peer_address": { 00:17:24.450 "trtype": "TCP", 00:17:24.450 "adrfam": "IPv4", 00:17:24.450 "traddr": "10.0.0.1", 00:17:24.450 "trsvcid": "56628" 00:17:24.450 }, 00:17:24.450 "auth": { 00:17:24.450 "state": "completed", 00:17:24.450 "digest": "sha256", 00:17:24.450 "dhgroup": "null" 00:17:24.450 } 00:17:24.450 } 00:17:24.450 ]' 00:17:24.450 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:24.709 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.709 12:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:24.709 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:24.709 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:24.709 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.709 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.709 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.967 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:25.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:25.536 12:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:25.795 00:17:25.795 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:25.795 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:25.795 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.055 12:18:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:26.055 { 00:17:26.055 "cntlid": 3, 00:17:26.055 "qid": 0, 00:17:26.055 "state": "enabled", 00:17:26.055 "listen_address": { 00:17:26.055 "trtype": "TCP", 00:17:26.055 "adrfam": "IPv4", 00:17:26.055 "traddr": "10.0.0.2", 00:17:26.055 "trsvcid": "4420" 00:17:26.055 }, 00:17:26.055 "peer_address": { 00:17:26.055 "trtype": "TCP", 00:17:26.055 "adrfam": "IPv4", 00:17:26.055 "traddr": "10.0.0.1", 00:17:26.055 "trsvcid": "56652" 00:17:26.055 }, 00:17:26.055 "auth": { 00:17:26.055 "state": "completed", 00:17:26.055 "digest": "sha256", 00:17:26.055 "dhgroup": "null" 00:17:26.055 } 00:17:26.055 } 00:17:26.055 ]' 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.055 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.314 12:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:17:26.883 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.883 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:26.883 12:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.883 12:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.883 12:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.883 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:26.883 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:26.883 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:27.142 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:27.143 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:27.143 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.401 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.401 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.401 12:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.401 12:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.401 12:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.401 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:27.401 { 00:17:27.401 "cntlid": 5, 00:17:27.401 "qid": 0, 00:17:27.401 "state": "enabled", 00:17:27.401 "listen_address": { 00:17:27.401 "trtype": "TCP", 00:17:27.401 "adrfam": "IPv4", 00:17:27.401 "traddr": "10.0.0.2", 00:17:27.401 "trsvcid": "4420" 00:17:27.401 }, 00:17:27.401 "peer_address": { 00:17:27.401 "trtype": "TCP", 00:17:27.401 "adrfam": "IPv4", 00:17:27.401 "traddr": "10.0.0.1", 00:17:27.401 "trsvcid": "56670" 00:17:27.401 }, 00:17:27.401 "auth": { 00:17:27.401 "state": "completed", 00:17:27.401 "digest": "sha256", 00:17:27.401 "dhgroup": "null" 00:17:27.401 } 00:17:27.401 } 00:17:27.401 ]' 00:17:27.401 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:27.401 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.401 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:27.401 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:27.401 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:27.660 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.660 12:18:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.660 12:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.660 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:17:28.229 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.229 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:28.229 12:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.229 12:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.229 12:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.229 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:28.229 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:28.229 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:28.489 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:17:28.489 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:28.489 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:28.489 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:28.489 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:28.489 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:28.489 12:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.489 12:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.489 12:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.489 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.489 12:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.748 00:17:28.748 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:28.748 12:18:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:28.748 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.007 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.007 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.007 12:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.007 12:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.007 12:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.007 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:29.007 { 00:17:29.007 "cntlid": 7, 00:17:29.008 "qid": 0, 00:17:29.008 "state": "enabled", 00:17:29.008 "listen_address": { 00:17:29.008 "trtype": "TCP", 00:17:29.008 "adrfam": "IPv4", 00:17:29.008 "traddr": "10.0.0.2", 00:17:29.008 "trsvcid": "4420" 00:17:29.008 }, 00:17:29.008 "peer_address": { 00:17:29.008 "trtype": "TCP", 00:17:29.008 "adrfam": "IPv4", 00:17:29.008 "traddr": "10.0.0.1", 00:17:29.008 "trsvcid": "56698" 00:17:29.008 }, 00:17:29.008 "auth": { 00:17:29.008 "state": "completed", 00:17:29.008 "digest": "sha256", 00:17:29.008 "dhgroup": "null" 00:17:29.008 } 00:17:29.008 } 00:17:29.008 ]' 00:17:29.008 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:29.008 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.008 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:29.008 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:29.008 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:29.008 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.008 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.008 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.267 12:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:29.836 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:30.095 00:17:30.095 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:30.095 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:30.095 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.354 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.354 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.354 12:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.354 12:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.354 12:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.354 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:30.354 { 00:17:30.354 "cntlid": 9, 00:17:30.354 "qid": 0, 00:17:30.354 "state": "enabled", 00:17:30.354 "listen_address": { 00:17:30.354 "trtype": "TCP", 00:17:30.354 "adrfam": "IPv4", 00:17:30.354 "traddr": "10.0.0.2", 00:17:30.354 "trsvcid": "4420" 00:17:30.354 }, 00:17:30.354 "peer_address": { 00:17:30.354 "trtype": "TCP", 00:17:30.354 "adrfam": "IPv4", 00:17:30.354 "traddr": "10.0.0.1", 
00:17:30.354 "trsvcid": "46408" 00:17:30.354 }, 00:17:30.354 "auth": { 00:17:30.354 "state": "completed", 00:17:30.354 "digest": "sha256", 00:17:30.354 "dhgroup": "ffdhe2048" 00:17:30.354 } 00:17:30.354 } 00:17:30.354 ]' 00:17:30.354 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:30.354 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.354 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:30.354 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.354 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:30.613 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.613 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.613 12:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.613 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:17:31.182 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.182 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:31.182 12:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.182 12:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.182 12:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.182 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:31.182 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:31.182 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:31.441 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:17:31.441 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:31.441 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.441 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:31.441 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:31.441 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:31.441 12:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.441 12:18:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:31.441 12:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.441 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:31.441 12:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:31.700 00:17:31.700 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:31.700 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:31.700 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:31.959 { 00:17:31.959 "cntlid": 11, 00:17:31.959 "qid": 0, 00:17:31.959 "state": "enabled", 00:17:31.959 "listen_address": { 00:17:31.959 "trtype": "TCP", 00:17:31.959 "adrfam": "IPv4", 00:17:31.959 "traddr": "10.0.0.2", 00:17:31.959 "trsvcid": "4420" 00:17:31.959 }, 00:17:31.959 "peer_address": { 00:17:31.959 "trtype": "TCP", 00:17:31.959 "adrfam": "IPv4", 00:17:31.959 "traddr": "10.0.0.1", 00:17:31.959 "trsvcid": "46432" 00:17:31.959 }, 00:17:31.959 "auth": { 00:17:31.959 "state": "completed", 00:17:31.959 "digest": "sha256", 00:17:31.959 "dhgroup": "ffdhe2048" 00:17:31.959 } 00:17:31.959 } 00:17:31.959 ]' 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.959 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.218 12:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:32.822 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:33.081 00:17:33.081 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:33.081 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:33.081 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.340 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.340 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:33.340 12:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.340 12:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.340 12:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.340 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:33.340 { 00:17:33.340 "cntlid": 13, 00:17:33.340 "qid": 0, 00:17:33.340 "state": "enabled", 00:17:33.340 "listen_address": { 00:17:33.340 "trtype": "TCP", 00:17:33.340 "adrfam": "IPv4", 00:17:33.340 "traddr": "10.0.0.2", 00:17:33.340 "trsvcid": "4420" 00:17:33.340 }, 00:17:33.340 "peer_address": { 00:17:33.340 "trtype": "TCP", 00:17:33.340 "adrfam": "IPv4", 00:17:33.340 "traddr": "10.0.0.1", 00:17:33.340 "trsvcid": "46462" 00:17:33.340 }, 00:17:33.340 "auth": { 00:17:33.340 "state": "completed", 00:17:33.340 "digest": "sha256", 00:17:33.340 "dhgroup": "ffdhe2048" 00:17:33.340 } 00:17:33.340 } 00:17:33.340 ]' 00:17:33.340 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:33.340 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.340 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:33.340 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.340 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:33.599 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.599 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.599 12:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.600 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:17:34.168 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.168 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:34.168 12:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.168 12:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.168 12:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.168 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:34.168 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:34.168 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:34.428 12:19:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:17:34.428 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:34.428 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.428 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:34.428 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:34.428 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:34.428 12:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.428 12:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.428 12:19:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.428 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.428 12:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.687 00:17:34.687 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:34.687 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:34.687 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.946 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.946 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.946 12:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.946 12:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.946 12:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.946 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:34.946 { 00:17:34.946 "cntlid": 15, 00:17:34.947 "qid": 0, 00:17:34.947 "state": "enabled", 00:17:34.947 "listen_address": { 00:17:34.947 "trtype": "TCP", 00:17:34.947 "adrfam": "IPv4", 00:17:34.947 "traddr": "10.0.0.2", 00:17:34.947 "trsvcid": "4420" 00:17:34.947 }, 00:17:34.947 "peer_address": { 00:17:34.947 "trtype": "TCP", 00:17:34.947 "adrfam": "IPv4", 00:17:34.947 "traddr": "10.0.0.1", 00:17:34.947 "trsvcid": "46496" 00:17:34.947 }, 00:17:34.947 "auth": { 00:17:34.947 "state": "completed", 00:17:34.947 "digest": "sha256", 00:17:34.947 "dhgroup": "ffdhe2048" 00:17:34.947 } 00:17:34.947 } 00:17:34.947 ]' 00:17:34.947 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:34.947 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.947 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:34.947 12:19:03 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.947 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:34.947 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.947 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.947 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.206 12:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:17:35.773 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.773 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:35.773 12:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.773 12:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.773 12:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.773 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.773 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:35.773 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:35.773 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.031 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:17:36.031 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:36.031 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:36.031 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:36.031 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:36.031 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:36.031 12:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.031 12:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.031 12:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.031 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:36.031 12:19:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:36.031 00:17:36.289 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:36.289 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:36.289 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.289 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.289 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.289 12:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.290 12:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.290 12:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.290 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:36.290 { 00:17:36.290 "cntlid": 17, 00:17:36.290 "qid": 0, 00:17:36.290 "state": "enabled", 00:17:36.290 "listen_address": { 00:17:36.290 "trtype": "TCP", 00:17:36.290 "adrfam": "IPv4", 00:17:36.290 "traddr": "10.0.0.2", 00:17:36.290 "trsvcid": "4420" 00:17:36.290 }, 00:17:36.290 "peer_address": { 00:17:36.290 "trtype": "TCP", 00:17:36.290 "adrfam": "IPv4", 00:17:36.290 "traddr": "10.0.0.1", 00:17:36.290 "trsvcid": "46518" 00:17:36.290 }, 00:17:36.290 "auth": { 00:17:36.290 "state": "completed", 00:17:36.290 "digest": "sha256", 00:17:36.290 "dhgroup": "ffdhe3072" 00:17:36.290 } 00:17:36.290 } 00:17:36.290 ]' 00:17:36.290 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:36.290 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.290 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:36.788 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.788 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:36.788 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.788 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.788 12:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.788 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:37.353 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:37.354 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:37.354 12:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.354 12:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.354 12:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.354 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:37.354 12:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:37.612 00:17:37.612 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:37.612 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:37.612 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:37.872 { 
00:17:37.872 "cntlid": 19, 00:17:37.872 "qid": 0, 00:17:37.872 "state": "enabled", 00:17:37.872 "listen_address": { 00:17:37.872 "trtype": "TCP", 00:17:37.872 "adrfam": "IPv4", 00:17:37.872 "traddr": "10.0.0.2", 00:17:37.872 "trsvcid": "4420" 00:17:37.872 }, 00:17:37.872 "peer_address": { 00:17:37.872 "trtype": "TCP", 00:17:37.872 "adrfam": "IPv4", 00:17:37.872 "traddr": "10.0.0.1", 00:17:37.872 "trsvcid": "46556" 00:17:37.872 }, 00:17:37.872 "auth": { 00:17:37.872 "state": "completed", 00:17:37.872 "digest": "sha256", 00:17:37.872 "dhgroup": "ffdhe3072" 00:17:37.872 } 00:17:37.872 } 00:17:37.872 ]' 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.872 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.131 12:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:17:38.698 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.698 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:38.698 12:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.698 12:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.698 12:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.698 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:38.698 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:38.698 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:38.957 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:17:38.957 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:38.957 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:38.957 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:38.957 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:38.957 
12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:38.957 12:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.957 12:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.957 12:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.957 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:38.957 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:39.216 00:17:39.216 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:39.216 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:39.216 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.216 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.216 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.216 12:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.216 12:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.216 12:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.216 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:39.216 { 00:17:39.216 "cntlid": 21, 00:17:39.216 "qid": 0, 00:17:39.216 "state": "enabled", 00:17:39.216 "listen_address": { 00:17:39.216 "trtype": "TCP", 00:17:39.216 "adrfam": "IPv4", 00:17:39.216 "traddr": "10.0.0.2", 00:17:39.216 "trsvcid": "4420" 00:17:39.216 }, 00:17:39.216 "peer_address": { 00:17:39.216 "trtype": "TCP", 00:17:39.216 "adrfam": "IPv4", 00:17:39.216 "traddr": "10.0.0.1", 00:17:39.216 "trsvcid": "46576" 00:17:39.216 }, 00:17:39.216 "auth": { 00:17:39.216 "state": "completed", 00:17:39.216 "digest": "sha256", 00:17:39.216 "dhgroup": "ffdhe3072" 00:17:39.216 } 00:17:39.216 } 00:17:39.216 ]' 00:17:39.216 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:39.474 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.474 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:39.474 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:39.474 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:39.474 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.474 12:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.474 12:19:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.732 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.299 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.557 00:17:40.557 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:40.557 12:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:40.557 12:19:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:40.816 { 00:17:40.816 "cntlid": 23, 00:17:40.816 "qid": 0, 00:17:40.816 "state": "enabled", 00:17:40.816 "listen_address": { 00:17:40.816 "trtype": "TCP", 00:17:40.816 "adrfam": "IPv4", 00:17:40.816 "traddr": "10.0.0.2", 00:17:40.816 "trsvcid": "4420" 00:17:40.816 }, 00:17:40.816 "peer_address": { 00:17:40.816 "trtype": "TCP", 00:17:40.816 "adrfam": "IPv4", 00:17:40.816 "traddr": "10.0.0.1", 00:17:40.816 "trsvcid": "46618" 00:17:40.816 }, 00:17:40.816 "auth": { 00:17:40.816 "state": "completed", 00:17:40.816 "digest": "sha256", 00:17:40.816 "dhgroup": "ffdhe3072" 00:17:40.816 } 00:17:40.816 } 00:17:40.816 ]' 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.816 12:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.074 12:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:17:41.640 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.640 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:41.640 12:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.640 12:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.640 12:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.640 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.640 12:19:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:41.640 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.640 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.898 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:17:41.898 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:41.898 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.898 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:41.898 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:41.898 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:41.898 12:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.898 12:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.898 12:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.898 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:41.898 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:42.156 00:17:42.156 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:42.156 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.156 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:42.156 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.156 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.156 12:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.156 12:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.415 12:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.415 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:42.415 { 00:17:42.415 "cntlid": 25, 00:17:42.415 "qid": 0, 00:17:42.415 "state": "enabled", 00:17:42.415 "listen_address": { 00:17:42.415 "trtype": "TCP", 00:17:42.415 "adrfam": "IPv4", 00:17:42.415 "traddr": "10.0.0.2", 00:17:42.415 "trsvcid": "4420" 00:17:42.415 }, 00:17:42.415 "peer_address": { 00:17:42.415 "trtype": "TCP", 00:17:42.415 "adrfam": "IPv4", 00:17:42.415 "traddr": "10.0.0.1", 00:17:42.415 "trsvcid": "46644" 00:17:42.415 }, 
00:17:42.415 "auth": { 00:17:42.415 "state": "completed", 00:17:42.415 "digest": "sha256", 00:17:42.415 "dhgroup": "ffdhe4096" 00:17:42.415 } 00:17:42.415 } 00:17:42.415 ]' 00:17:42.415 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:42.415 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.415 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:42.415 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.415 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:42.415 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.415 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.415 12:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.672 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:17:43.237 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.237 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:43.237 12:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.237 12:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.237 12:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.237 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:43.238 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:43.495 00:17:43.495 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:43.495 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:43.495 12:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.754 12:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.754 12:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.754 12:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.754 12:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.754 12:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.754 12:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:43.754 { 00:17:43.754 "cntlid": 27, 00:17:43.754 "qid": 0, 00:17:43.754 "state": "enabled", 00:17:43.754 "listen_address": { 00:17:43.754 "trtype": "TCP", 00:17:43.754 "adrfam": "IPv4", 00:17:43.754 "traddr": "10.0.0.2", 00:17:43.754 "trsvcid": "4420" 00:17:43.754 }, 00:17:43.754 "peer_address": { 00:17:43.754 "trtype": "TCP", 00:17:43.754 "adrfam": "IPv4", 00:17:43.754 "traddr": "10.0.0.1", 00:17:43.754 "trsvcid": "46670" 00:17:43.754 }, 00:17:43.754 "auth": { 00:17:43.754 "state": "completed", 00:17:43.754 "digest": "sha256", 00:17:43.754 "dhgroup": "ffdhe4096" 00:17:43.754 } 00:17:43.754 } 00:17:43.754 ]' 00:17:43.754 12:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:43.754 12:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.754 12:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:43.754 12:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.754 12:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:44.012 12:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.012 12:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.012 12:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.012 12:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e 
--dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:17:44.579 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.579 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:44.579 12:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.579 12:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 12:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.579 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:44.579 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:44.579 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:44.837 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:17:44.837 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:44.837 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.837 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:44.837 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:44.837 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:44.837 12:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.837 12:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.837 12:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.837 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:44.837 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:45.095 00:17:45.096 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:45.096 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:45.096 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.354 12:19:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:45.354 { 00:17:45.354 "cntlid": 29, 00:17:45.354 "qid": 0, 00:17:45.354 "state": "enabled", 00:17:45.354 "listen_address": { 00:17:45.354 "trtype": "TCP", 00:17:45.354 "adrfam": "IPv4", 00:17:45.354 "traddr": "10.0.0.2", 00:17:45.354 "trsvcid": "4420" 00:17:45.354 }, 00:17:45.354 "peer_address": { 00:17:45.354 "trtype": "TCP", 00:17:45.354 "adrfam": "IPv4", 00:17:45.354 "traddr": "10.0.0.1", 00:17:45.354 "trsvcid": "46690" 00:17:45.354 }, 00:17:45.354 "auth": { 00:17:45.354 "state": "completed", 00:17:45.354 "digest": "sha256", 00:17:45.354 "dhgroup": "ffdhe4096" 00:17:45.354 } 00:17:45.354 } 00:17:45.354 ]' 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.354 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.613 12:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:17:46.214 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.214 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:46.214 12:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.214 12:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.214 12:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.214 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:46.214 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:46.214 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:46.472 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 
ffdhe4096 3 00:17:46.472 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:46.472 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.472 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:46.472 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:46.472 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:46.472 12:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.472 12:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.472 12:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.472 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.472 12:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.730 00:17:46.730 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:46.730 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:46.730 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.730 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.730 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.730 12:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.730 12:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.730 12:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.730 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:46.730 { 00:17:46.730 "cntlid": 31, 00:17:46.730 "qid": 0, 00:17:46.730 "state": "enabled", 00:17:46.730 "listen_address": { 00:17:46.730 "trtype": "TCP", 00:17:46.730 "adrfam": "IPv4", 00:17:46.730 "traddr": "10.0.0.2", 00:17:46.730 "trsvcid": "4420" 00:17:46.730 }, 00:17:46.730 "peer_address": { 00:17:46.730 "trtype": "TCP", 00:17:46.730 "adrfam": "IPv4", 00:17:46.730 "traddr": "10.0.0.1", 00:17:46.730 "trsvcid": "46726" 00:17:46.730 }, 00:17:46.730 "auth": { 00:17:46.730 "state": "completed", 00:17:46.730 "digest": "sha256", 00:17:46.730 "dhgroup": "ffdhe4096" 00:17:46.730 } 00:17:46.730 } 00:17:46.730 ]' 00:17:46.730 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:46.730 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.730 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:46.989 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:46.989 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:46.989 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.989 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.989 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.989 12:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:17:47.554 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.555 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:47.555 12:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.555 12:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.555 12:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.555 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.555 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:47.555 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.555 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.812 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:17:47.812 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:47.812 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.812 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:47.812 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:47.812 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:47.813 12:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.813 12:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.813 12:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.813 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:47.813 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:48.070 00:17:48.070 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:48.070 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:48.070 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.328 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.328 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.328 12:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.328 12:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.328 12:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.328 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:48.328 { 00:17:48.328 "cntlid": 33, 00:17:48.328 "qid": 0, 00:17:48.328 "state": "enabled", 00:17:48.328 "listen_address": { 00:17:48.328 "trtype": "TCP", 00:17:48.328 "adrfam": "IPv4", 00:17:48.328 "traddr": "10.0.0.2", 00:17:48.328 "trsvcid": "4420" 00:17:48.328 }, 00:17:48.328 "peer_address": { 00:17:48.328 "trtype": "TCP", 00:17:48.328 "adrfam": "IPv4", 00:17:48.328 "traddr": "10.0.0.1", 00:17:48.328 "trsvcid": "46746" 00:17:48.328 }, 00:17:48.328 "auth": { 00:17:48.328 "state": "completed", 00:17:48.328 "digest": "sha256", 00:17:48.328 "dhgroup": "ffdhe6144" 00:17:48.328 } 00:17:48.328 } 00:17:48.328 ]' 00:17:48.328 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:48.328 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.328 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:48.328 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.328 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:48.594 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.594 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.594 12:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.594 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:17:49.162 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.162 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:49.162 12:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.162 12:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.162 12:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.162 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:49.162 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.162 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.419 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:17:49.419 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:49.419 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.419 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:49.419 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.419 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:49.420 12:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.420 12:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.420 12:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.420 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:49.420 12:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:49.677 00:17:49.677 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:49.677 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:49.677 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:49.934 { 00:17:49.934 "cntlid": 35, 00:17:49.934 "qid": 0, 
00:17:49.934 "state": "enabled", 00:17:49.934 "listen_address": { 00:17:49.934 "trtype": "TCP", 00:17:49.934 "adrfam": "IPv4", 00:17:49.934 "traddr": "10.0.0.2", 00:17:49.934 "trsvcid": "4420" 00:17:49.934 }, 00:17:49.934 "peer_address": { 00:17:49.934 "trtype": "TCP", 00:17:49.934 "adrfam": "IPv4", 00:17:49.934 "traddr": "10.0.0.1", 00:17:49.934 "trsvcid": "60834" 00:17:49.934 }, 00:17:49.934 "auth": { 00:17:49.934 "state": "completed", 00:17:49.934 "digest": "sha256", 00:17:49.934 "dhgroup": "ffdhe6144" 00:17:49.934 } 00:17:49.934 } 00:17:49.934 ]' 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.934 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.199 12:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:17:50.763 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.763 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:50.763 12:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.763 12:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.763 12:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.763 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:50.763 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.763 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:51.020 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:17:51.020 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:51.020 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.020 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:51.020 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.020 12:19:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:51.020 12:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.020 12:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.020 12:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.020 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.020 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.277 00:17:51.277 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:51.277 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.277 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:51.535 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.535 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.535 12:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.535 12:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.535 12:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.535 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:51.535 { 00:17:51.535 "cntlid": 37, 00:17:51.535 "qid": 0, 00:17:51.535 "state": "enabled", 00:17:51.535 "listen_address": { 00:17:51.535 "trtype": "TCP", 00:17:51.535 "adrfam": "IPv4", 00:17:51.535 "traddr": "10.0.0.2", 00:17:51.535 "trsvcid": "4420" 00:17:51.535 }, 00:17:51.535 "peer_address": { 00:17:51.535 "trtype": "TCP", 00:17:51.535 "adrfam": "IPv4", 00:17:51.535 "traddr": "10.0.0.1", 00:17:51.535 "trsvcid": "60860" 00:17:51.535 }, 00:17:51.535 "auth": { 00:17:51.535 "state": "completed", 00:17:51.535 "digest": "sha256", 00:17:51.535 "dhgroup": "ffdhe6144" 00:17:51.535 } 00:17:51.535 } 00:17:51.535 ]' 00:17:51.535 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:51.535 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.535 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:51.535 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.535 12:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:51.535 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.535 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.535 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.793 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:17:52.359 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.359 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:52.359 12:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.359 12:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.359 12:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.359 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:52.359 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.359 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.618 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:17:52.618 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:52.618 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.618 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:52.618 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:52.618 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:52.618 12:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.618 12:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.618 12:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.618 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.618 12:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.876 00:17:52.876 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:52.876 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:52.876 12:19:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:53.134 { 00:17:53.134 "cntlid": 39, 00:17:53.134 "qid": 0, 00:17:53.134 "state": "enabled", 00:17:53.134 "listen_address": { 00:17:53.134 "trtype": "TCP", 00:17:53.134 "adrfam": "IPv4", 00:17:53.134 "traddr": "10.0.0.2", 00:17:53.134 "trsvcid": "4420" 00:17:53.134 }, 00:17:53.134 "peer_address": { 00:17:53.134 "trtype": "TCP", 00:17:53.134 "adrfam": "IPv4", 00:17:53.134 "traddr": "10.0.0.1", 00:17:53.134 "trsvcid": "60876" 00:17:53.134 }, 00:17:53.134 "auth": { 00:17:53.134 "state": "completed", 00:17:53.134 "digest": "sha256", 00:17:53.134 "dhgroup": "ffdhe6144" 00:17:53.134 } 00:17:53.134 } 00:17:53.134 ]' 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.134 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.391 12:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:17:53.957 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.957 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:53.957 12:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.957 12:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.957 12:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.957 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.957 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:17:53.957 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:53.957 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:54.216 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:17:54.216 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:54.216 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.216 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:54.216 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.216 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:54.216 12:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.216 12:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.216 12:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.216 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:54.216 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:54.474 00:17:54.474 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:54.474 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:54.474 12:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.732 12:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.732 12:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.732 12:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.732 12:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.732 12:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.732 12:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:54.732 { 00:17:54.732 "cntlid": 41, 00:17:54.732 "qid": 0, 00:17:54.732 "state": "enabled", 00:17:54.732 "listen_address": { 00:17:54.732 "trtype": "TCP", 00:17:54.732 "adrfam": "IPv4", 00:17:54.732 "traddr": "10.0.0.2", 00:17:54.732 "trsvcid": "4420" 00:17:54.732 }, 00:17:54.732 "peer_address": { 00:17:54.732 "trtype": "TCP", 00:17:54.732 "adrfam": "IPv4", 00:17:54.732 "traddr": "10.0.0.1", 00:17:54.732 "trsvcid": "60912" 00:17:54.732 }, 00:17:54.732 "auth": { 00:17:54.732 "state": 
"completed", 00:17:54.732 "digest": "sha256", 00:17:54.732 "dhgroup": "ffdhe8192" 00:17:54.732 } 00:17:54.732 } 00:17:54.732 ]' 00:17:54.732 12:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:54.732 12:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.732 12:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:54.990 12:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.990 12:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:54.990 12:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.990 12:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.990 12:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.990 12:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:17:55.556 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.556 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:55.556 12:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.556 12:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.556 12:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.556 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:55.556 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.556 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.814 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:17:55.814 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:55.814 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.814 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:55.814 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:55.814 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:55.814 12:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.814 12:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.814 12:19:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.814 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:55.814 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:56.380 00:17:56.380 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:56.380 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:56.380 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.380 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.380 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.380 12:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.380 12:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.380 12:19:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.380 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:56.380 { 00:17:56.380 "cntlid": 43, 00:17:56.380 "qid": 0, 00:17:56.380 "state": "enabled", 00:17:56.380 "listen_address": { 00:17:56.380 "trtype": "TCP", 00:17:56.380 "adrfam": "IPv4", 00:17:56.380 "traddr": "10.0.0.2", 00:17:56.380 "trsvcid": "4420" 00:17:56.380 }, 00:17:56.380 "peer_address": { 00:17:56.380 "trtype": "TCP", 00:17:56.380 "adrfam": "IPv4", 00:17:56.380 "traddr": "10.0.0.1", 00:17:56.380 "trsvcid": "60948" 00:17:56.380 }, 00:17:56.380 "auth": { 00:17:56.380 "state": "completed", 00:17:56.380 "digest": "sha256", 00:17:56.380 "dhgroup": "ffdhe8192" 00:17:56.380 } 00:17:56.380 } 00:17:56.380 ]' 00:17:56.380 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:56.638 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.638 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:56.638 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.638 12:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:56.638 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.638 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.638 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.896 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:57.463 12:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:58.029 00:17:58.029 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:58.030 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:58.030 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:58.288 { 00:17:58.288 "cntlid": 45, 00:17:58.288 "qid": 0, 00:17:58.288 "state": "enabled", 00:17:58.288 "listen_address": { 00:17:58.288 "trtype": "TCP", 00:17:58.288 "adrfam": "IPv4", 00:17:58.288 "traddr": "10.0.0.2", 00:17:58.288 "trsvcid": "4420" 00:17:58.288 }, 00:17:58.288 "peer_address": { 00:17:58.288 "trtype": "TCP", 00:17:58.288 "adrfam": "IPv4", 00:17:58.288 "traddr": "10.0.0.1", 00:17:58.288 "trsvcid": "60978" 00:17:58.288 }, 00:17:58.288 "auth": { 00:17:58.288 "state": "completed", 00:17:58.288 "digest": "sha256", 00:17:58.288 "dhgroup": "ffdhe8192" 00:17:58.288 } 00:17:58.288 } 00:17:58.288 ]' 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.288 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.546 12:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:17:59.174 
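The segment above is one complete pass of the script's connect_authenticate helper for the sha256/ffdhe8192 combination with key2. Stripped of the xtrace prefixes, the host-RPC half of such a pass boils down to the commands below; the rpc.py path, socket, NQNs and key names are copied from the trace, the shell variables exist only for readability, and the target-side calls (made through the script's rpc_cmd wrapper in the log) are shown here as plain rpc.py calls against the default socket, which is an assumption.

# one authentication round, host-RPC side (sketch reconstructed from the trace above)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

# target side: allow this host NQN and bind it to the DH-HMAC-CHAP key under test
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2

# host side (separate SPDK app behind /var/tmp/host.sock): attach with the same key
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2

# the controller must come up, and the target-side qpair must report completed auth
"$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# tear down before the next combination
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0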
12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.174 12:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.740 00:17:59.740 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:59.740 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.740 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:59.740 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.740 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.740 12:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.740 12:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.740 12:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.740 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:59.740 { 00:17:59.740 "cntlid": 47, 00:17:59.740 "qid": 0, 00:17:59.740 "state": "enabled", 00:17:59.740 "listen_address": { 00:17:59.740 "trtype": "TCP", 00:17:59.740 "adrfam": "IPv4", 00:17:59.740 "traddr": "10.0.0.2", 00:17:59.740 "trsvcid": "4420" 00:17:59.740 }, 00:17:59.740 "peer_address": { 00:17:59.740 "trtype": "TCP", 00:17:59.740 "adrfam": "IPv4", 00:17:59.740 "traddr": "10.0.0.1", 00:17:59.740 "trsvcid": "45528" 00:17:59.740 }, 00:17:59.740 "auth": { 00:17:59.740 "state": "completed", 00:17:59.740 "digest": "sha256", 00:17:59.740 "dhgroup": "ffdhe8192" 00:17:59.740 } 00:17:59.740 } 00:17:59.741 ]' 00:17:59.741 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:59.998 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.998 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:59.998 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.998 
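The [[ ... ]] comparisons at the target/auth.sh@44-@47 markers are the pass criteria for every round: the target's qpair listing must report the configured digest and DH group, plus an auth state of "completed". Spelled out as standalone commands, with the JSON captured into a variable as the trace suggests, the check is approximately the sketch below (socket handling again simplified to the default rpc.py socket):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# fetch the qpairs of the subsystem under test and compare the negotiated auth fields
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]      # expected digest for this round
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # expected DH group for this round
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication must have finished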
12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:59.998 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.998 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.998 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.255 12:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.820 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:00.820 12:19:29 
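The target/auth.sh@84-@86 markers just above show the outer loops advancing: the digest moves on to sha384 and the DH group list restarts at null, with bdev_nvme_set_options narrowing the host to that single combination before connect_authenticate runs again. The loop skeleton implied by those markers is roughly the sketch below; the digests, dhgroups and keys arrays are populated earlier in the script, outside this excerpt.

# nesting reconstructed from the @84-@89 trace markers
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # pin the host to one digest/DH-group pair for this pass
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done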
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:01.079 00:18:01.079 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:01.079 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:01.079 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:01.337 { 00:18:01.337 "cntlid": 49, 00:18:01.337 "qid": 0, 00:18:01.337 "state": "enabled", 00:18:01.337 "listen_address": { 00:18:01.337 "trtype": "TCP", 00:18:01.337 "adrfam": "IPv4", 00:18:01.337 "traddr": "10.0.0.2", 00:18:01.337 "trsvcid": "4420" 00:18:01.337 }, 00:18:01.337 "peer_address": { 00:18:01.337 "trtype": "TCP", 00:18:01.337 "adrfam": "IPv4", 00:18:01.337 "traddr": "10.0.0.1", 00:18:01.337 "trsvcid": "45552" 00:18:01.337 }, 00:18:01.337 "auth": { 00:18:01.337 "state": "completed", 00:18:01.337 "digest": "sha384", 00:18:01.337 "dhgroup": "null" 00:18:01.337 } 00:18:01.337 } 00:18:01.337 ]' 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.337 12:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.607 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:18:02.172 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.172 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:02.173 12:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.173 12:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.173 12:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.173 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:02.173 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.173 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.432 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:18:02.432 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:02.432 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.432 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:02.432 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.432 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:02.432 12:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.432 12:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.432 12:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.432 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:02.432 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:02.691 00:18:02.691 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:02.691 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.691 12:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:02.691 12:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.691 12:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.691 12:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.691 12:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.691 12:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.691 12:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:02.691 { 00:18:02.691 "cntlid": 51, 00:18:02.691 "qid": 
0, 00:18:02.691 "state": "enabled", 00:18:02.691 "listen_address": { 00:18:02.691 "trtype": "TCP", 00:18:02.691 "adrfam": "IPv4", 00:18:02.691 "traddr": "10.0.0.2", 00:18:02.691 "trsvcid": "4420" 00:18:02.691 }, 00:18:02.691 "peer_address": { 00:18:02.691 "trtype": "TCP", 00:18:02.691 "adrfam": "IPv4", 00:18:02.691 "traddr": "10.0.0.1", 00:18:02.691 "trsvcid": "45588" 00:18:02.691 }, 00:18:02.691 "auth": { 00:18:02.691 "state": "completed", 00:18:02.691 "digest": "sha384", 00:18:02.691 "dhgroup": "null" 00:18:02.691 } 00:18:02.691 } 00:18:02.691 ]' 00:18:02.691 12:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:02.949 12:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.949 12:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:02.949 12:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:02.949 12:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:02.949 12:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.949 12:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.949 12:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.207 12:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:03.772 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:04.030 00:18:04.030 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:04.030 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:04.030 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:04.288 { 00:18:04.288 "cntlid": 53, 00:18:04.288 "qid": 0, 00:18:04.288 "state": "enabled", 00:18:04.288 "listen_address": { 00:18:04.288 "trtype": "TCP", 00:18:04.288 "adrfam": "IPv4", 00:18:04.288 "traddr": "10.0.0.2", 00:18:04.288 "trsvcid": "4420" 00:18:04.288 }, 00:18:04.288 "peer_address": { 00:18:04.288 "trtype": "TCP", 00:18:04.288 "adrfam": "IPv4", 00:18:04.288 "traddr": "10.0.0.1", 00:18:04.288 "trsvcid": "45620" 00:18:04.288 }, 00:18:04.288 "auth": { 00:18:04.288 "state": "completed", 00:18:04.288 "digest": "sha384", 00:18:04.288 "dhgroup": "null" 00:18:04.288 } 00:18:04.288 } 00:18:04.288 ]' 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.288 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.546 12:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:18:05.112 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.112 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:05.112 12:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.112 12:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.112 12:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.112 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:05.112 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:05.112 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:05.369 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:18:05.369 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:05.369 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.369 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:05.369 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.369 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:05.369 12:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.369 12:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.370 12:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.370 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.370 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.632 00:18:05.632 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:05.632 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:05.632 12:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.632 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.632 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.632 12:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.632 12:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.632 12:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.632 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:05.632 { 00:18:05.632 "cntlid": 55, 00:18:05.632 "qid": 0, 00:18:05.632 "state": "enabled", 00:18:05.632 "listen_address": { 00:18:05.632 "trtype": "TCP", 00:18:05.632 "adrfam": "IPv4", 00:18:05.632 "traddr": "10.0.0.2", 00:18:05.632 "trsvcid": "4420" 00:18:05.632 }, 00:18:05.632 "peer_address": { 00:18:05.632 "trtype": "TCP", 00:18:05.632 "adrfam": "IPv4", 00:18:05.632 "traddr": "10.0.0.1", 00:18:05.632 "trsvcid": "45640" 00:18:05.632 }, 00:18:05.632 "auth": { 00:18:05.632 "state": "completed", 00:18:05.632 "digest": "sha384", 00:18:05.632 "dhgroup": "null" 00:18:05.632 } 00:18:05.632 } 00:18:05.632 ]' 00:18:05.632 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:05.891 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.891 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:05.891 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:05.891 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:05.891 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.891 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.891 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.148 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:18:06.714 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.714 12:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:06.714 12:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.714 12:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:06.714 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:06.973 00:18:06.973 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:06.973 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:06.973 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:07.231 { 00:18:07.231 "cntlid": 57, 00:18:07.231 "qid": 0, 00:18:07.231 "state": "enabled", 00:18:07.231 "listen_address": { 00:18:07.231 "trtype": "TCP", 00:18:07.231 "adrfam": "IPv4", 00:18:07.231 "traddr": "10.0.0.2", 00:18:07.231 "trsvcid": "4420" 00:18:07.231 }, 00:18:07.231 "peer_address": { 00:18:07.231 "trtype": "TCP", 00:18:07.231 "adrfam": "IPv4", 00:18:07.231 "traddr": "10.0.0.1", 00:18:07.231 "trsvcid": "45678" 00:18:07.231 }, 00:18:07.231 "auth": { 00:18:07.231 "state": "completed", 00:18:07.231 "digest": "sha384", 00:18:07.231 "dhgroup": "ffdhe2048" 00:18:07.231 } 00:18:07.231 } 
00:18:07.231 ]' 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.231 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.490 12:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:18:08.057 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.057 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:08.057 12:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.057 12:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.057 12:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.057 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:08.057 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:08.057 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:08.315 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:18:08.315 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:08.316 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.316 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:08.316 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.316 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:08.316 12:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.316 12:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.316 12:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.316 12:19:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:08.316 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:08.574 00:18:08.574 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:08.574 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:08.574 12:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.574 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.574 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.575 12:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.575 12:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.575 12:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.575 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:08.575 { 00:18:08.575 "cntlid": 59, 00:18:08.575 "qid": 0, 00:18:08.575 "state": "enabled", 00:18:08.575 "listen_address": { 00:18:08.575 "trtype": "TCP", 00:18:08.575 "adrfam": "IPv4", 00:18:08.575 "traddr": "10.0.0.2", 00:18:08.575 "trsvcid": "4420" 00:18:08.575 }, 00:18:08.575 "peer_address": { 00:18:08.575 "trtype": "TCP", 00:18:08.575 "adrfam": "IPv4", 00:18:08.575 "traddr": "10.0.0.1", 00:18:08.575 "trsvcid": "45704" 00:18:08.575 }, 00:18:08.575 "auth": { 00:18:08.575 "state": "completed", 00:18:08.575 "digest": "sha384", 00:18:08.575 "dhgroup": "ffdhe2048" 00:18:08.575 } 00:18:08.575 } 00:18:08.575 ]' 00:18:08.575 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:08.833 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.833 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:08.833 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:08.833 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:08.833 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.833 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.833 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.092 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:18:09.659 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.659 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:09.659 12:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.659 12:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.659 12:19:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.659 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:09.659 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:09.659 12:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:09.659 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:18:09.659 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:09.659 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:09.659 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:09.659 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.659 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:09.659 12:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.659 12:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.659 12:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.659 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:09.659 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:09.917 00:18:09.917 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:09.917 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.917 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:10.175 { 00:18:10.175 "cntlid": 61, 00:18:10.175 "qid": 0, 00:18:10.175 "state": "enabled", 00:18:10.175 "listen_address": { 00:18:10.175 "trtype": "TCP", 00:18:10.175 "adrfam": "IPv4", 00:18:10.175 "traddr": "10.0.0.2", 00:18:10.175 "trsvcid": "4420" 00:18:10.175 }, 00:18:10.175 "peer_address": { 00:18:10.175 "trtype": "TCP", 00:18:10.175 "adrfam": "IPv4", 00:18:10.175 "traddr": "10.0.0.1", 00:18:10.175 "trsvcid": "45558" 00:18:10.175 }, 00:18:10.175 "auth": { 00:18:10.175 "state": "completed", 00:18:10.175 "digest": "sha384", 00:18:10.175 "dhgroup": "ffdhe2048" 00:18:10.175 } 00:18:10.175 } 00:18:10.175 ]' 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.175 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.433 12:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:18:10.999 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.999 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:10.999 12:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.999 12:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.999 12:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.999 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:10.999 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:10.999 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:11.258 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:18:11.258 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:11.258 12:19:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:18:11.258 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:11.258 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:11.258 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:11.258 12:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.258 12:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.258 12:19:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.258 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.258 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.517 00:18:11.517 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:11.517 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:11.517 12:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.517 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.517 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.517 12:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.517 12:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.517 12:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.517 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:11.517 { 00:18:11.517 "cntlid": 63, 00:18:11.517 "qid": 0, 00:18:11.517 "state": "enabled", 00:18:11.517 "listen_address": { 00:18:11.517 "trtype": "TCP", 00:18:11.517 "adrfam": "IPv4", 00:18:11.517 "traddr": "10.0.0.2", 00:18:11.517 "trsvcid": "4420" 00:18:11.517 }, 00:18:11.517 "peer_address": { 00:18:11.517 "trtype": "TCP", 00:18:11.517 "adrfam": "IPv4", 00:18:11.517 "traddr": "10.0.0.1", 00:18:11.517 "trsvcid": "45604" 00:18:11.517 }, 00:18:11.517 "auth": { 00:18:11.517 "state": "completed", 00:18:11.517 "digest": "sha384", 00:18:11.517 "dhgroup": "ffdhe2048" 00:18:11.517 } 00:18:11.517 } 00:18:11.517 ]' 00:18:11.517 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:11.775 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.775 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:11.775 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.776 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:11.776 12:19:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.776 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.776 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.063 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:18:12.629 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.629 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:12.629 12:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.629 12:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.629 12:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.629 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.629 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:12.630 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.630 12:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.630 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:18:12.630 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:12.630 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:12.630 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:12.630 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:12.630 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:12.630 12:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.630 12:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.630 12:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.630 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:12.630 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:12.888 00:18:12.888 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:12.888 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:12.888 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:13.147 { 00:18:13.147 "cntlid": 65, 00:18:13.147 "qid": 0, 00:18:13.147 "state": "enabled", 00:18:13.147 "listen_address": { 00:18:13.147 "trtype": "TCP", 00:18:13.147 "adrfam": "IPv4", 00:18:13.147 "traddr": "10.0.0.2", 00:18:13.147 "trsvcid": "4420" 00:18:13.147 }, 00:18:13.147 "peer_address": { 00:18:13.147 "trtype": "TCP", 00:18:13.147 "adrfam": "IPv4", 00:18:13.147 "traddr": "10.0.0.1", 00:18:13.147 "trsvcid": "45638" 00:18:13.147 }, 00:18:13.147 "auth": { 00:18:13.147 "state": "completed", 00:18:13.147 "digest": "sha384", 00:18:13.147 "dhgroup": "ffdhe3072" 00:18:13.147 } 00:18:13.147 } 00:18:13.147 ]' 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.147 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.405 12:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:18:13.970 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.970 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:13.970 12:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.970 
12:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.971 12:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.971 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:13.971 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:13.971 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:14.229 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:18:14.229 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:14.229 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:14.229 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:14.229 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.229 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:14.229 12:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.229 12:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.229 12:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.229 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:14.229 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:14.489 00:18:14.489 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:14.489 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:14.489 12:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.489 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.489 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.489 12:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.489 12:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.489 12:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.489 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:14.489 { 00:18:14.489 "cntlid": 67, 00:18:14.489 "qid": 0, 00:18:14.489 "state": "enabled", 00:18:14.489 "listen_address": { 00:18:14.489 "trtype": "TCP", 00:18:14.489 "adrfam": "IPv4", 00:18:14.489 "traddr": "10.0.0.2", 00:18:14.489 "trsvcid": 
"4420" 00:18:14.489 }, 00:18:14.489 "peer_address": { 00:18:14.489 "trtype": "TCP", 00:18:14.489 "adrfam": "IPv4", 00:18:14.489 "traddr": "10.0.0.1", 00:18:14.489 "trsvcid": "45664" 00:18:14.489 }, 00:18:14.489 "auth": { 00:18:14.489 "state": "completed", 00:18:14.489 "digest": "sha384", 00:18:14.489 "dhgroup": "ffdhe3072" 00:18:14.489 } 00:18:14.489 } 00:18:14.489 ]' 00:18:14.748 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:14.748 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.748 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:14.748 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.748 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:14.748 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.748 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.748 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.006 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:18:15.573 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.573 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:15.573 12:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.573 12:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.573 12:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.573 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:15.573 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:15.573 12:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:15.573 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:18:15.573 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:15.573 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.573 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:15.573 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:15.573 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:15.573 12:19:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.573 12:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.573 12:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.573 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:15.573 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:15.831 00:18:15.831 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:15.831 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:15.831 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:16.090 { 00:18:16.090 "cntlid": 69, 00:18:16.090 "qid": 0, 00:18:16.090 "state": "enabled", 00:18:16.090 "listen_address": { 00:18:16.090 "trtype": "TCP", 00:18:16.090 "adrfam": "IPv4", 00:18:16.090 "traddr": "10.0.0.2", 00:18:16.090 "trsvcid": "4420" 00:18:16.090 }, 00:18:16.090 "peer_address": { 00:18:16.090 "trtype": "TCP", 00:18:16.090 "adrfam": "IPv4", 00:18:16.090 "traddr": "10.0.0.1", 00:18:16.090 "trsvcid": "45682" 00:18:16.090 }, 00:18:16.090 "auth": { 00:18:16.090 "state": "completed", 00:18:16.090 "digest": "sha384", 00:18:16.090 "dhgroup": "ffdhe3072" 00:18:16.090 } 00:18:16.090 } 00:18:16.090 ]' 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.090 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.347 12:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:18:16.912 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.912 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:16.912 12:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.912 12:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.912 12:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.912 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:16.912 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:16.912 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:17.170 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:18:17.170 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:17.170 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.170 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:17.170 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.170 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:17.170 12:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.170 12:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.170 12:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.170 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.170 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.428 00:18:17.428 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:17.428 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:17.428 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.428 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:18:17.428 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.428 12:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.428 12:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.428 12:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.428 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:17.428 { 00:18:17.428 "cntlid": 71, 00:18:17.428 "qid": 0, 00:18:17.428 "state": "enabled", 00:18:17.428 "listen_address": { 00:18:17.428 "trtype": "TCP", 00:18:17.428 "adrfam": "IPv4", 00:18:17.428 "traddr": "10.0.0.2", 00:18:17.428 "trsvcid": "4420" 00:18:17.428 }, 00:18:17.428 "peer_address": { 00:18:17.428 "trtype": "TCP", 00:18:17.428 "adrfam": "IPv4", 00:18:17.428 "traddr": "10.0.0.1", 00:18:17.428 "trsvcid": "45716" 00:18:17.428 }, 00:18:17.428 "auth": { 00:18:17.428 "state": "completed", 00:18:17.428 "digest": "sha384", 00:18:17.428 "dhgroup": "ffdhe3072" 00:18:17.428 } 00:18:17.428 } 00:18:17.428 ]' 00:18:17.428 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:17.686 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.686 12:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:17.686 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:17.686 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:17.686 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.686 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.686 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.944 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:18.511 12:19:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:18.511 12:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:18.770 00:18:18.770 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:18.770 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:18.770 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.028 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.028 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.028 12:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.028 12:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.028 12:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.028 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:19.028 { 00:18:19.028 "cntlid": 73, 00:18:19.028 "qid": 0, 00:18:19.028 "state": "enabled", 00:18:19.028 "listen_address": { 00:18:19.028 "trtype": "TCP", 00:18:19.028 "adrfam": "IPv4", 00:18:19.028 "traddr": "10.0.0.2", 00:18:19.028 "trsvcid": "4420" 00:18:19.028 }, 00:18:19.028 "peer_address": { 00:18:19.028 "trtype": "TCP", 00:18:19.028 "adrfam": "IPv4", 00:18:19.028 "traddr": "10.0.0.1", 00:18:19.028 "trsvcid": "45758" 00:18:19.028 }, 00:18:19.028 "auth": { 00:18:19.028 "state": "completed", 00:18:19.028 "digest": "sha384", 00:18:19.028 "dhgroup": "ffdhe4096" 00:18:19.028 } 00:18:19.028 } 00:18:19.028 ]' 00:18:19.028 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:18:19.028 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.028 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:19.028 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.028 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:19.287 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.287 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.287 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.287 12:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:18:19.853 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.853 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:19.853 12:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.853 12:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.853 12:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.853 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:19.853 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:19.853 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:20.111 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:18:20.111 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:20.111 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.111 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:20.111 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:20.111 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:20.111 12:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.111 12:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.111 12:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.111 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:20.111 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:20.370 00:18:20.370 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:20.370 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:20.370 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.628 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.628 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.628 12:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.628 12:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.628 12:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.628 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:20.628 { 00:18:20.628 "cntlid": 75, 00:18:20.628 "qid": 0, 00:18:20.628 "state": "enabled", 00:18:20.628 "listen_address": { 00:18:20.628 "trtype": "TCP", 00:18:20.628 "adrfam": "IPv4", 00:18:20.628 "traddr": "10.0.0.2", 00:18:20.628 "trsvcid": "4420" 00:18:20.628 }, 00:18:20.628 "peer_address": { 00:18:20.628 "trtype": "TCP", 00:18:20.628 "adrfam": "IPv4", 00:18:20.628 "traddr": "10.0.0.1", 00:18:20.628 "trsvcid": "36536" 00:18:20.628 }, 00:18:20.628 "auth": { 00:18:20.628 "state": "completed", 00:18:20.628 "digest": "sha384", 00:18:20.628 "dhgroup": "ffdhe4096" 00:18:20.628 } 00:18:20.628 } 00:18:20.628 ]' 00:18:20.628 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:20.629 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.629 12:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:20.629 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.629 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:20.629 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.629 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.629 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.887 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:21.454 12:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:21.713 00:18:21.971 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:21.971 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.971 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:21.971 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.971 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.972 12:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.972 12:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.972 12:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:18:21.972 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:21.972 { 00:18:21.972 "cntlid": 77, 00:18:21.972 "qid": 0, 00:18:21.972 "state": "enabled", 00:18:21.972 "listen_address": { 00:18:21.972 "trtype": "TCP", 00:18:21.972 "adrfam": "IPv4", 00:18:21.972 "traddr": "10.0.0.2", 00:18:21.972 "trsvcid": "4420" 00:18:21.972 }, 00:18:21.972 "peer_address": { 00:18:21.972 "trtype": "TCP", 00:18:21.972 "adrfam": "IPv4", 00:18:21.972 "traddr": "10.0.0.1", 00:18:21.972 "trsvcid": "36552" 00:18:21.972 }, 00:18:21.972 "auth": { 00:18:21.972 "state": "completed", 00:18:21.972 "digest": "sha384", 00:18:21.972 "dhgroup": "ffdhe4096" 00:18:21.972 } 00:18:21.972 } 00:18:21.972 ]' 00:18:21.972 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:21.972 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.972 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:22.230 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.231 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:22.231 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.231 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.231 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.231 12:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:18:22.797 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.797 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:22.797 12:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.797 12:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.797 12:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.797 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:22.797 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.797 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:23.055 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:18:23.055 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:23.055 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.055 12:19:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:23.055 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:23.055 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:23.055 12:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.055 12:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.055 12:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.055 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.055 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.314 00:18:23.314 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:23.314 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:23.314 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.572 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.572 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.572 12:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.572 12:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.572 12:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.572 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:23.572 { 00:18:23.572 "cntlid": 79, 00:18:23.572 "qid": 0, 00:18:23.572 "state": "enabled", 00:18:23.572 "listen_address": { 00:18:23.572 "trtype": "TCP", 00:18:23.572 "adrfam": "IPv4", 00:18:23.572 "traddr": "10.0.0.2", 00:18:23.572 "trsvcid": "4420" 00:18:23.572 }, 00:18:23.572 "peer_address": { 00:18:23.572 "trtype": "TCP", 00:18:23.572 "adrfam": "IPv4", 00:18:23.572 "traddr": "10.0.0.1", 00:18:23.572 "trsvcid": "36580" 00:18:23.572 }, 00:18:23.572 "auth": { 00:18:23.572 "state": "completed", 00:18:23.572 "digest": "sha384", 00:18:23.572 "dhgroup": "ffdhe4096" 00:18:23.572 } 00:18:23.572 } 00:18:23.572 ]' 00:18:23.572 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:23.572 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.572 12:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:23.572 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:23.572 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:23.572 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.572 12:19:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.572 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.830 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:18:24.397 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.397 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:24.397 12:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.397 12:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.397 12:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.397 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.397 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:24.397 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.397 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.658 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:18:24.658 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:24.658 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.658 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:24.658 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:24.658 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:24.658 12:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.658 12:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.658 12:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.658 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:24.658 12:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:24.919 00:18:24.919 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:24.919 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:24.919 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:25.176 { 00:18:25.176 "cntlid": 81, 00:18:25.176 "qid": 0, 00:18:25.176 "state": "enabled", 00:18:25.176 "listen_address": { 00:18:25.176 "trtype": "TCP", 00:18:25.176 "adrfam": "IPv4", 00:18:25.176 "traddr": "10.0.0.2", 00:18:25.176 "trsvcid": "4420" 00:18:25.176 }, 00:18:25.176 "peer_address": { 00:18:25.176 "trtype": "TCP", 00:18:25.176 "adrfam": "IPv4", 00:18:25.176 "traddr": "10.0.0.1", 00:18:25.176 "trsvcid": "36602" 00:18:25.176 }, 00:18:25.176 "auth": { 00:18:25.176 "state": "completed", 00:18:25.176 "digest": "sha384", 00:18:25.176 "dhgroup": "ffdhe6144" 00:18:25.176 } 00:18:25.176 } 00:18:25.176 ]' 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.176 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.435 12:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:18:26.001 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.001 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:26.001 12:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.001 12:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:26.001 12:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.001 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:26.001 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:26.001 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:26.260 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:18:26.260 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:26.260 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.260 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:26.260 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.260 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:26.260 12:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.260 12:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.260 12:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.260 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:26.260 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:26.518 00:18:26.518 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:26.518 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:26.518 12:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.775 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.775 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.775 12:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.776 12:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.776 12:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.776 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:26.776 { 00:18:26.776 "cntlid": 83, 00:18:26.776 "qid": 0, 00:18:26.776 "state": "enabled", 00:18:26.776 "listen_address": { 00:18:26.776 "trtype": "TCP", 00:18:26.776 "adrfam": "IPv4", 00:18:26.776 "traddr": "10.0.0.2", 00:18:26.776 "trsvcid": "4420" 00:18:26.776 }, 00:18:26.776 "peer_address": { 00:18:26.776 
"trtype": "TCP", 00:18:26.776 "adrfam": "IPv4", 00:18:26.776 "traddr": "10.0.0.1", 00:18:26.776 "trsvcid": "36626" 00:18:26.776 }, 00:18:26.776 "auth": { 00:18:26.776 "state": "completed", 00:18:26.776 "digest": "sha384", 00:18:26.776 "dhgroup": "ffdhe6144" 00:18:26.776 } 00:18:26.776 } 00:18:26.776 ]' 00:18:26.776 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:26.776 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.776 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:26.776 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.776 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:26.776 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.776 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.776 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.033 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:18:27.597 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.597 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:27.597 12:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.597 12:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.597 12:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.597 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:27.597 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.597 12:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.597 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:18:27.597 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:27.597 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.597 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:27.598 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:27.598 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:27.598 12:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:18:27.598 12:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.856 12:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.856 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:27.856 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:28.114 00:18:28.114 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:28.114 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:28.114 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.114 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.114 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.114 12:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.114 12:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.373 12:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.373 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:28.373 { 00:18:28.373 "cntlid": 85, 00:18:28.373 "qid": 0, 00:18:28.373 "state": "enabled", 00:18:28.373 "listen_address": { 00:18:28.373 "trtype": "TCP", 00:18:28.373 "adrfam": "IPv4", 00:18:28.373 "traddr": "10.0.0.2", 00:18:28.373 "trsvcid": "4420" 00:18:28.373 }, 00:18:28.373 "peer_address": { 00:18:28.373 "trtype": "TCP", 00:18:28.373 "adrfam": "IPv4", 00:18:28.373 "traddr": "10.0.0.1", 00:18:28.373 "trsvcid": "36670" 00:18:28.373 }, 00:18:28.373 "auth": { 00:18:28.373 "state": "completed", 00:18:28.373 "digest": "sha384", 00:18:28.373 "dhgroup": "ffdhe6144" 00:18:28.373 } 00:18:28.373 } 00:18:28.373 ]' 00:18:28.373 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:28.373 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.373 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:28.373 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.373 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:28.373 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.373 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.373 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.631 12:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.197 12:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.762 00:18:29.762 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:29.762 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:29.762 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.762 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.762 12:19:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.762 12:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.762 12:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.762 12:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.762 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:29.762 { 00:18:29.762 "cntlid": 87, 00:18:29.762 "qid": 0, 00:18:29.762 "state": "enabled", 00:18:29.762 "listen_address": { 00:18:29.762 "trtype": "TCP", 00:18:29.762 "adrfam": "IPv4", 00:18:29.762 "traddr": "10.0.0.2", 00:18:29.762 "trsvcid": "4420" 00:18:29.762 }, 00:18:29.762 "peer_address": { 00:18:29.762 "trtype": "TCP", 00:18:29.762 "adrfam": "IPv4", 00:18:29.762 "traddr": "10.0.0.1", 00:18:29.762 "trsvcid": "48042" 00:18:29.762 }, 00:18:29.762 "auth": { 00:18:29.762 "state": "completed", 00:18:29.762 "digest": "sha384", 00:18:29.762 "dhgroup": "ffdhe6144" 00:18:29.762 } 00:18:29.762 } 00:18:29.762 ]' 00:18:29.762 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:29.762 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.762 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:29.762 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:29.763 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:30.021 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.021 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.021 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.021 12:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:18:30.586 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.586 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:30.586 12:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.586 12:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.586 12:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.586 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.586 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:30.586 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:30.586 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:30.844 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:18:30.844 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:30.844 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.844 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:30.844 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:30.844 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:30.844 12:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.844 12:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.844 12:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.844 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:30.844 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:31.410 00:18:31.410 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:31.410 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:31.410 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.410 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.410 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.410 12:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.410 12:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.410 12:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.410 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:31.410 { 00:18:31.410 "cntlid": 89, 00:18:31.410 "qid": 0, 00:18:31.410 "state": "enabled", 00:18:31.410 "listen_address": { 00:18:31.410 "trtype": "TCP", 00:18:31.410 "adrfam": "IPv4", 00:18:31.410 "traddr": "10.0.0.2", 00:18:31.410 "trsvcid": "4420" 00:18:31.410 }, 00:18:31.410 "peer_address": { 00:18:31.410 "trtype": "TCP", 00:18:31.410 "adrfam": "IPv4", 00:18:31.410 "traddr": "10.0.0.1", 00:18:31.410 "trsvcid": "48074" 00:18:31.410 }, 00:18:31.410 "auth": { 00:18:31.410 "state": "completed", 00:18:31.410 "digest": "sha384", 00:18:31.410 "dhgroup": "ffdhe8192" 00:18:31.410 } 00:18:31.410 } 00:18:31.410 ]' 00:18:31.410 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:31.410 12:19:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.410 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:31.668 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.668 12:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:31.668 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.668 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.668 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.926 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:18:32.492 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.492 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:32.492 12:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.492 12:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.492 12:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.492 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:32.492 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:32.493 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:32.493 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:18:32.493 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:32.493 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.493 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:32.493 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:32.493 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:32.493 12:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.493 12:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.493 12:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.493 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:32.493 12:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:33.059 00:18:33.059 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:33.059 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:33.059 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.317 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.317 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.317 12:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.317 12:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.317 12:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.317 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:33.317 { 00:18:33.317 "cntlid": 91, 00:18:33.317 "qid": 0, 00:18:33.317 "state": "enabled", 00:18:33.317 "listen_address": { 00:18:33.317 "trtype": "TCP", 00:18:33.317 "adrfam": "IPv4", 00:18:33.317 "traddr": "10.0.0.2", 00:18:33.317 "trsvcid": "4420" 00:18:33.317 }, 00:18:33.317 "peer_address": { 00:18:33.317 "trtype": "TCP", 00:18:33.318 "adrfam": "IPv4", 00:18:33.318 "traddr": "10.0.0.1", 00:18:33.318 "trsvcid": "48096" 00:18:33.318 }, 00:18:33.318 "auth": { 00:18:33.318 "state": "completed", 00:18:33.318 "digest": "sha384", 00:18:33.318 "dhgroup": "ffdhe8192" 00:18:33.318 } 00:18:33.318 } 00:18:33.318 ]' 00:18:33.318 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:33.318 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.318 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:33.318 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.318 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:33.318 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.318 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.318 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.575 12:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.141 12:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.142 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:34.142 12:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:34.711 00:18:34.711 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:34.711 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:34.711 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:34.970 { 00:18:34.970 "cntlid": 93, 00:18:34.970 "qid": 0, 00:18:34.970 "state": "enabled", 00:18:34.970 "listen_address": { 00:18:34.970 "trtype": "TCP", 00:18:34.970 "adrfam": "IPv4", 00:18:34.970 "traddr": "10.0.0.2", 00:18:34.970 "trsvcid": "4420" 00:18:34.970 }, 00:18:34.970 "peer_address": { 00:18:34.970 "trtype": "TCP", 00:18:34.970 "adrfam": "IPv4", 00:18:34.970 "traddr": "10.0.0.1", 00:18:34.970 "trsvcid": "48134" 00:18:34.970 }, 00:18:34.970 "auth": { 00:18:34.970 "state": "completed", 00:18:34.970 "digest": "sha384", 00:18:34.970 "dhgroup": "ffdhe8192" 00:18:34.970 } 00:18:34.970 } 00:18:34.970 ]' 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.970 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.228 12:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:18:35.794 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.794 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:35.794 12:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.794 12:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.794 12:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.794 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:35.794 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:35.794 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:36.052 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:18:36.052 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:36.052 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.052 12:20:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:36.052 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:36.052 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:36.052 12:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.052 12:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.052 12:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.052 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.052 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.311 00:18:36.311 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:36.311 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:36.311 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.569 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.569 12:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.569 12:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.569 12:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.569 12:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.569 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:36.569 { 00:18:36.569 "cntlid": 95, 00:18:36.569 "qid": 0, 00:18:36.569 "state": "enabled", 00:18:36.569 "listen_address": { 00:18:36.569 "trtype": "TCP", 00:18:36.569 "adrfam": "IPv4", 00:18:36.569 "traddr": "10.0.0.2", 00:18:36.569 "trsvcid": "4420" 00:18:36.569 }, 00:18:36.569 "peer_address": { 00:18:36.569 "trtype": "TCP", 00:18:36.569 "adrfam": "IPv4", 00:18:36.569 "traddr": "10.0.0.1", 00:18:36.569 "trsvcid": "48154" 00:18:36.569 }, 00:18:36.569 "auth": { 00:18:36.569 "state": "completed", 00:18:36.569 "digest": "sha384", 00:18:36.569 "dhgroup": "ffdhe8192" 00:18:36.569 } 00:18:36.569 } 00:18:36.569 ]' 00:18:36.569 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:36.569 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.569 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:36.569 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.827 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:36.827 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.827 12:20:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.827 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.827 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:18:37.394 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.394 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:37.394 12:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.394 12:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.394 12:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.394 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:37.394 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.394 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:37.394 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:37.394 12:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:37.652 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:18:37.652 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:37.652 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:37.652 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:37.652 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.652 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:37.652 12:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.652 12:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.652 12:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.652 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:37.652 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:37.928 00:18:37.928 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:37.928 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:37.928 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:38.197 { 00:18:38.197 "cntlid": 97, 00:18:38.197 "qid": 0, 00:18:38.197 "state": "enabled", 00:18:38.197 "listen_address": { 00:18:38.197 "trtype": "TCP", 00:18:38.197 "adrfam": "IPv4", 00:18:38.197 "traddr": "10.0.0.2", 00:18:38.197 "trsvcid": "4420" 00:18:38.197 }, 00:18:38.197 "peer_address": { 00:18:38.197 "trtype": "TCP", 00:18:38.197 "adrfam": "IPv4", 00:18:38.197 "traddr": "10.0.0.1", 00:18:38.197 "trsvcid": "48184" 00:18:38.197 }, 00:18:38.197 "auth": { 00:18:38.197 "state": "completed", 00:18:38.197 "digest": "sha512", 00:18:38.197 "dhgroup": "null" 00:18:38.197 } 00:18:38.197 } 00:18:38.197 ]' 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.197 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.455 12:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.022 12:20:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:39.022 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:39.281 00:18:39.281 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:39.281 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.281 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:39.539 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.539 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.539 12:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.539 12:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.540 12:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.540 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:39.540 { 00:18:39.540 "cntlid": 99, 00:18:39.540 "qid": 0, 00:18:39.540 "state": "enabled", 00:18:39.540 "listen_address": { 00:18:39.540 "trtype": "TCP", 00:18:39.540 "adrfam": "IPv4", 00:18:39.540 "traddr": "10.0.0.2", 00:18:39.540 "trsvcid": "4420" 00:18:39.540 }, 
00:18:39.540 "peer_address": { 00:18:39.540 "trtype": "TCP", 00:18:39.540 "adrfam": "IPv4", 00:18:39.540 "traddr": "10.0.0.1", 00:18:39.540 "trsvcid": "33142" 00:18:39.540 }, 00:18:39.540 "auth": { 00:18:39.540 "state": "completed", 00:18:39.540 "digest": "sha512", 00:18:39.540 "dhgroup": "null" 00:18:39.540 } 00:18:39.540 } 00:18:39.540 ]' 00:18:39.540 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:39.540 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.540 12:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:39.540 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:39.540 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:39.798 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.798 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.798 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.798 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:18:40.366 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.366 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:40.366 12:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.366 12:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.366 12:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.366 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:40.366 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:40.366 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:40.625 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:18:40.625 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:40.625 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:40.625 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:40.625 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.625 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:40.625 12:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:18:40.625 12:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.625 12:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.625 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:40.625 12:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:40.884 00:18:40.884 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:40.884 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:40.884 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.884 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.884 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.884 12:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.884 12:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.884 12:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.884 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:40.884 { 00:18:40.884 "cntlid": 101, 00:18:40.884 "qid": 0, 00:18:40.884 "state": "enabled", 00:18:40.884 "listen_address": { 00:18:40.884 "trtype": "TCP", 00:18:40.884 "adrfam": "IPv4", 00:18:40.884 "traddr": "10.0.0.2", 00:18:40.884 "trsvcid": "4420" 00:18:40.884 }, 00:18:40.884 "peer_address": { 00:18:40.884 "trtype": "TCP", 00:18:40.884 "adrfam": "IPv4", 00:18:40.884 "traddr": "10.0.0.1", 00:18:40.884 "trsvcid": "33168" 00:18:40.884 }, 00:18:40.884 "auth": { 00:18:40.884 "state": "completed", 00:18:40.884 "digest": "sha512", 00:18:40.884 "dhgroup": "null" 00:18:40.884 } 00:18:40.884 } 00:18:40.884 ]' 00:18:40.884 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:41.143 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.143 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:41.143 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:41.143 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:41.143 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.143 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.143 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.401 12:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:41.968 12:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.969 12:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.969 12:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.969 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.969 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.227 00:18:42.227 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:42.227 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:42.227 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.484 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.484 12:20:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.484 12:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.484 12:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.484 12:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.484 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:42.484 { 00:18:42.484 "cntlid": 103, 00:18:42.484 "qid": 0, 00:18:42.484 "state": "enabled", 00:18:42.484 "listen_address": { 00:18:42.484 "trtype": "TCP", 00:18:42.484 "adrfam": "IPv4", 00:18:42.484 "traddr": "10.0.0.2", 00:18:42.484 "trsvcid": "4420" 00:18:42.484 }, 00:18:42.484 "peer_address": { 00:18:42.484 "trtype": "TCP", 00:18:42.484 "adrfam": "IPv4", 00:18:42.484 "traddr": "10.0.0.1", 00:18:42.484 "trsvcid": "33192" 00:18:42.484 }, 00:18:42.484 "auth": { 00:18:42.484 "state": "completed", 00:18:42.485 "digest": "sha512", 00:18:42.485 "dhgroup": "null" 00:18:42.485 } 00:18:42.485 } 00:18:42.485 ]' 00:18:42.485 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:42.485 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.485 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:42.485 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:42.485 12:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:42.742 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.742 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.742 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.742 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:18:43.309 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.309 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:43.309 12:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.309 12:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.309 12:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.309 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.309 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:43.309 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:43.309 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:43.567 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:18:43.567 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:43.567 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:43.567 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:43.567 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.567 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:43.567 12:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.567 12:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.567 12:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.567 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:43.567 12:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:43.826 00:18:43.826 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:43.826 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.826 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:43.826 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.084 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.085 12:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.085 12:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.085 12:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.085 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:44.085 { 00:18:44.085 "cntlid": 105, 00:18:44.085 "qid": 0, 00:18:44.085 "state": "enabled", 00:18:44.085 "listen_address": { 00:18:44.085 "trtype": "TCP", 00:18:44.085 "adrfam": "IPv4", 00:18:44.085 "traddr": "10.0.0.2", 00:18:44.085 "trsvcid": "4420" 00:18:44.085 }, 00:18:44.085 "peer_address": { 00:18:44.085 "trtype": "TCP", 00:18:44.085 "adrfam": "IPv4", 00:18:44.085 "traddr": "10.0.0.1", 00:18:44.085 "trsvcid": "33226" 00:18:44.085 }, 00:18:44.085 "auth": { 00:18:44.085 "state": "completed", 00:18:44.085 "digest": "sha512", 00:18:44.085 "dhgroup": "ffdhe2048" 00:18:44.085 } 00:18:44.085 } 00:18:44.085 ]' 00:18:44.085 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:44.085 12:20:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.085 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:44.085 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:44.085 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:44.085 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.085 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.085 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.343 12:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.908 12:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.166 12:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.166 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:45.166 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:45.166 00:18:45.166 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:45.166 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:45.166 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.424 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.424 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.424 12:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.424 12:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.424 12:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.424 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:45.424 { 00:18:45.424 "cntlid": 107, 00:18:45.424 "qid": 0, 00:18:45.424 "state": "enabled", 00:18:45.424 "listen_address": { 00:18:45.424 "trtype": "TCP", 00:18:45.424 "adrfam": "IPv4", 00:18:45.424 "traddr": "10.0.0.2", 00:18:45.424 "trsvcid": "4420" 00:18:45.424 }, 00:18:45.424 "peer_address": { 00:18:45.424 "trtype": "TCP", 00:18:45.424 "adrfam": "IPv4", 00:18:45.424 "traddr": "10.0.0.1", 00:18:45.424 "trsvcid": "33238" 00:18:45.424 }, 00:18:45.424 "auth": { 00:18:45.424 "state": "completed", 00:18:45.424 "digest": "sha512", 00:18:45.424 "dhgroup": "ffdhe2048" 00:18:45.424 } 00:18:45.424 } 00:18:45.424 ]' 00:18:45.424 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:45.424 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.424 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:45.682 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:45.682 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:45.682 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.682 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.682 12:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.682 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:18:46.249 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:46.249 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:46.249 12:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.249 12:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.249 12:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.249 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:46.249 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:46.249 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:46.507 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:18:46.507 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:46.507 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:46.507 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:46.507 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.507 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:46.507 12:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.507 12:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.507 12:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.507 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:46.507 12:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:46.765 00:18:46.765 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:46.765 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:46.765 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:47.024 { 00:18:47.024 "cntlid": 109, 00:18:47.024 "qid": 0, 00:18:47.024 "state": "enabled", 00:18:47.024 "listen_address": { 00:18:47.024 "trtype": "TCP", 00:18:47.024 "adrfam": "IPv4", 00:18:47.024 "traddr": "10.0.0.2", 00:18:47.024 "trsvcid": "4420" 00:18:47.024 }, 00:18:47.024 "peer_address": { 00:18:47.024 "trtype": "TCP", 00:18:47.024 "adrfam": "IPv4", 00:18:47.024 "traddr": "10.0.0.1", 00:18:47.024 "trsvcid": "33268" 00:18:47.024 }, 00:18:47.024 "auth": { 00:18:47.024 "state": "completed", 00:18:47.024 "digest": "sha512", 00:18:47.024 "dhgroup": "ffdhe2048" 00:18:47.024 } 00:18:47.024 } 00:18:47.024 ]' 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.024 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.282 12:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:18:47.848 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.848 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:47.848 12:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.849 12:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.849 12:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.849 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:47.849 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:47.849 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.107 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:18:48.107 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:48.107 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.107 12:20:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:48.107 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.107 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:48.107 12:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.107 12:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.107 12:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.107 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.107 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.366 00:18:48.366 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:48.366 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:48.366 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.366 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.366 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.366 12:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.366 12:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.366 12:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.366 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:48.366 { 00:18:48.366 "cntlid": 111, 00:18:48.366 "qid": 0, 00:18:48.366 "state": "enabled", 00:18:48.366 "listen_address": { 00:18:48.366 "trtype": "TCP", 00:18:48.366 "adrfam": "IPv4", 00:18:48.366 "traddr": "10.0.0.2", 00:18:48.366 "trsvcid": "4420" 00:18:48.366 }, 00:18:48.366 "peer_address": { 00:18:48.366 "trtype": "TCP", 00:18:48.366 "adrfam": "IPv4", 00:18:48.366 "traddr": "10.0.0.1", 00:18:48.366 "trsvcid": "33290" 00:18:48.366 }, 00:18:48.366 "auth": { 00:18:48.366 "state": "completed", 00:18:48.366 "digest": "sha512", 00:18:48.366 "dhgroup": "ffdhe2048" 00:18:48.366 } 00:18:48.366 } 00:18:48.366 ]' 00:18:48.366 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:48.624 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.624 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:48.624 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.624 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:48.624 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.624 12:20:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.624 12:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.882 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:49.449 12:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:49.707 00:18:49.707 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:49.707 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:49.707 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:49.966 { 00:18:49.966 "cntlid": 113, 00:18:49.966 "qid": 0, 00:18:49.966 "state": "enabled", 00:18:49.966 "listen_address": { 00:18:49.966 "trtype": "TCP", 00:18:49.966 "adrfam": "IPv4", 00:18:49.966 "traddr": "10.0.0.2", 00:18:49.966 "trsvcid": "4420" 00:18:49.966 }, 00:18:49.966 "peer_address": { 00:18:49.966 "trtype": "TCP", 00:18:49.966 "adrfam": "IPv4", 00:18:49.966 "traddr": "10.0.0.1", 00:18:49.966 "trsvcid": "47010" 00:18:49.966 }, 00:18:49.966 "auth": { 00:18:49.966 "state": "completed", 00:18:49.966 "digest": "sha512", 00:18:49.966 "dhgroup": "ffdhe3072" 00:18:49.966 } 00:18:49.966 } 00:18:49.966 ]' 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.966 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.224 12:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:18:50.800 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.800 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:50.800 12:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.800 12:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:50.800 12:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.800 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:50.800 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:50.800 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:51.089 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:18:51.089 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:51.089 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:51.089 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:51.089 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:51.089 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:51.089 12:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.089 12:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.089 12:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.089 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:51.089 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:51.089 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:51.359 { 00:18:51.359 "cntlid": 115, 00:18:51.359 "qid": 0, 00:18:51.359 "state": "enabled", 00:18:51.359 "listen_address": { 00:18:51.359 "trtype": "TCP", 00:18:51.359 "adrfam": "IPv4", 00:18:51.359 "traddr": "10.0.0.2", 00:18:51.359 "trsvcid": "4420" 00:18:51.359 }, 00:18:51.359 "peer_address": { 00:18:51.359 
"trtype": "TCP", 00:18:51.359 "adrfam": "IPv4", 00:18:51.359 "traddr": "10.0.0.1", 00:18:51.359 "trsvcid": "47022" 00:18:51.359 }, 00:18:51.359 "auth": { 00:18:51.359 "state": "completed", 00:18:51.359 "digest": "sha512", 00:18:51.359 "dhgroup": "ffdhe3072" 00:18:51.359 } 00:18:51.359 } 00:18:51.359 ]' 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:51.359 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:51.617 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.617 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.618 12:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.618 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:18:52.184 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.184 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:52.184 12:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.184 12:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.184 12:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.184 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:52.184 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:52.184 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:52.443 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:18:52.443 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:52.443 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.443 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:52.443 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.443 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:52.443 12:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:18:52.443 12:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.443 12:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.443 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:52.443 12:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:52.703 00:18:52.703 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:52.703 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.703 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:52.960 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.960 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.960 12:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.960 12:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.960 12:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.960 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:52.960 { 00:18:52.960 "cntlid": 117, 00:18:52.960 "qid": 0, 00:18:52.960 "state": "enabled", 00:18:52.960 "listen_address": { 00:18:52.960 "trtype": "TCP", 00:18:52.960 "adrfam": "IPv4", 00:18:52.960 "traddr": "10.0.0.2", 00:18:52.960 "trsvcid": "4420" 00:18:52.960 }, 00:18:52.960 "peer_address": { 00:18:52.960 "trtype": "TCP", 00:18:52.960 "adrfam": "IPv4", 00:18:52.960 "traddr": "10.0.0.1", 00:18:52.960 "trsvcid": "47044" 00:18:52.960 }, 00:18:52.960 "auth": { 00:18:52.960 "state": "completed", 00:18:52.960 "digest": "sha512", 00:18:52.960 "dhgroup": "ffdhe3072" 00:18:52.960 } 00:18:52.960 } 00:18:52.960 ]' 00:18:52.960 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:52.961 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.961 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:52.961 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:52.961 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:52.961 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.961 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.961 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.219 12:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.786 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.045 00:18:54.045 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:54.045 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.045 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:54.303 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.303 12:20:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.303 12:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.303 12:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.303 12:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.303 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:54.303 { 00:18:54.303 "cntlid": 119, 00:18:54.303 "qid": 0, 00:18:54.303 "state": "enabled", 00:18:54.303 "listen_address": { 00:18:54.303 "trtype": "TCP", 00:18:54.303 "adrfam": "IPv4", 00:18:54.303 "traddr": "10.0.0.2", 00:18:54.303 "trsvcid": "4420" 00:18:54.303 }, 00:18:54.303 "peer_address": { 00:18:54.303 "trtype": "TCP", 00:18:54.303 "adrfam": "IPv4", 00:18:54.303 "traddr": "10.0.0.1", 00:18:54.303 "trsvcid": "47064" 00:18:54.303 }, 00:18:54.303 "auth": { 00:18:54.303 "state": "completed", 00:18:54.303 "digest": "sha512", 00:18:54.303 "dhgroup": "ffdhe3072" 00:18:54.303 } 00:18:54.303 } 00:18:54.303 ]' 00:18:54.303 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:54.303 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.303 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:54.562 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.562 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:54.562 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.562 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.562 12:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.562 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:18:55.131 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.131 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:55.131 12:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.131 12:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.131 12:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.131 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.131 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:55.131 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:55.131 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:55.389 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:18:55.389 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:55.389 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:55.389 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:55.389 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:55.389 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:55.389 12:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.389 12:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.389 12:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.389 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:55.389 12:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:55.648 00:18:55.648 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:55.648 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:55.648 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:55.907 { 00:18:55.907 "cntlid": 121, 00:18:55.907 "qid": 0, 00:18:55.907 "state": "enabled", 00:18:55.907 "listen_address": { 00:18:55.907 "trtype": "TCP", 00:18:55.907 "adrfam": "IPv4", 00:18:55.907 "traddr": "10.0.0.2", 00:18:55.907 "trsvcid": "4420" 00:18:55.907 }, 00:18:55.907 "peer_address": { 00:18:55.907 "trtype": "TCP", 00:18:55.907 "adrfam": "IPv4", 00:18:55.907 "traddr": "10.0.0.1", 00:18:55.907 "trsvcid": "47094" 00:18:55.907 }, 00:18:55.907 "auth": { 00:18:55.907 "state": "completed", 00:18:55.907 "digest": "sha512", 00:18:55.907 "dhgroup": "ffdhe4096" 00:18:55.907 } 00:18:55.907 } 00:18:55.907 ]' 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:55.907 12:20:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.907 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.165 12:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:18:56.731 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.731 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:56.731 12:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.731 12:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.731 12:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.731 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:56.731 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:56.731 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:56.990 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:18:56.990 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:56.990 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.990 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:56.990 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.990 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:56.990 12:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.990 12:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.990 12:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.990 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:56.990 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:57.248 00:18:57.249 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:57.249 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:57.249 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.249 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.249 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.249 12:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.249 12:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.249 12:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.249 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:57.249 { 00:18:57.249 "cntlid": 123, 00:18:57.249 "qid": 0, 00:18:57.249 "state": "enabled", 00:18:57.249 "listen_address": { 00:18:57.249 "trtype": "TCP", 00:18:57.249 "adrfam": "IPv4", 00:18:57.249 "traddr": "10.0.0.2", 00:18:57.249 "trsvcid": "4420" 00:18:57.249 }, 00:18:57.249 "peer_address": { 00:18:57.249 "trtype": "TCP", 00:18:57.249 "adrfam": "IPv4", 00:18:57.249 "traddr": "10.0.0.1", 00:18:57.249 "trsvcid": "47122" 00:18:57.249 }, 00:18:57.249 "auth": { 00:18:57.249 "state": "completed", 00:18:57.249 "digest": "sha512", 00:18:57.249 "dhgroup": "ffdhe4096" 00:18:57.249 } 00:18:57.249 } 00:18:57.249 ]' 00:18:57.249 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:57.507 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.507 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:57.507 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:57.507 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:57.507 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.507 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.507 12:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.765 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:58.332 12:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:58.590 00:18:58.590 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:58.590 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.590 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:58.848 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.848 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.848 12:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.848 12:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.848 12:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:18:58.848 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:58.848 { 00:18:58.848 "cntlid": 125, 00:18:58.848 "qid": 0, 00:18:58.848 "state": "enabled", 00:18:58.848 "listen_address": { 00:18:58.848 "trtype": "TCP", 00:18:58.848 "adrfam": "IPv4", 00:18:58.848 "traddr": "10.0.0.2", 00:18:58.848 "trsvcid": "4420" 00:18:58.848 }, 00:18:58.848 "peer_address": { 00:18:58.848 "trtype": "TCP", 00:18:58.848 "adrfam": "IPv4", 00:18:58.848 "traddr": "10.0.0.1", 00:18:58.848 "trsvcid": "47164" 00:18:58.848 }, 00:18:58.848 "auth": { 00:18:58.848 "state": "completed", 00:18:58.848 "digest": "sha512", 00:18:58.848 "dhgroup": "ffdhe4096" 00:18:58.848 } 00:18:58.848 } 00:18:58.848 ]' 00:18:58.848 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:58.848 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.848 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:58.848 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:58.848 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:59.106 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.106 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.106 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.106 12:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:18:59.673 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.673 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:59.673 12:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.673 12:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.673 12:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.673 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:59.673 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:59.673 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:59.931 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:18:59.931 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:59.931 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.931 12:20:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:59.931 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.931 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:59.931 12:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.931 12:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.931 12:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.931 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.931 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.190 00:19:00.190 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:00.190 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:00.190 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:00.448 { 00:19:00.448 "cntlid": 127, 00:19:00.448 "qid": 0, 00:19:00.448 "state": "enabled", 00:19:00.448 "listen_address": { 00:19:00.448 "trtype": "TCP", 00:19:00.448 "adrfam": "IPv4", 00:19:00.448 "traddr": "10.0.0.2", 00:19:00.448 "trsvcid": "4420" 00:19:00.448 }, 00:19:00.448 "peer_address": { 00:19:00.448 "trtype": "TCP", 00:19:00.448 "adrfam": "IPv4", 00:19:00.448 "traddr": "10.0.0.1", 00:19:00.448 "trsvcid": "49506" 00:19:00.448 }, 00:19:00.448 "auth": { 00:19:00.448 "state": "completed", 00:19:00.448 "digest": "sha512", 00:19:00.448 "dhgroup": "ffdhe4096" 00:19:00.448 } 00:19:00.448 } 00:19:00.448 ]' 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.448 12:20:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.448 12:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.707 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:19:01.273 12:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.274 12:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.532 12:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.532 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:01.532 12:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:01.792 00:19:01.792 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:01.792 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:01.792 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:02.051 { 00:19:02.051 "cntlid": 129, 00:19:02.051 "qid": 0, 00:19:02.051 "state": "enabled", 00:19:02.051 "listen_address": { 00:19:02.051 "trtype": "TCP", 00:19:02.051 "adrfam": "IPv4", 00:19:02.051 "traddr": "10.0.0.2", 00:19:02.051 "trsvcid": "4420" 00:19:02.051 }, 00:19:02.051 "peer_address": { 00:19:02.051 "trtype": "TCP", 00:19:02.051 "adrfam": "IPv4", 00:19:02.051 "traddr": "10.0.0.1", 00:19:02.051 "trsvcid": "49532" 00:19:02.051 }, 00:19:02.051 "auth": { 00:19:02.051 "state": "completed", 00:19:02.051 "digest": "sha512", 00:19:02.051 "dhgroup": "ffdhe6144" 00:19:02.051 } 00:19:02.051 } 00:19:02.051 ]' 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.051 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.311 12:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:02.880 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:03.449 00:19:03.449 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:03.449 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:03.449 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.449 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.449 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.449 12:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.450 12:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.450 12:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.450 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:03.450 { 00:19:03.450 "cntlid": 131, 00:19:03.450 "qid": 0, 00:19:03.450 "state": "enabled", 00:19:03.450 "listen_address": { 00:19:03.450 "trtype": "TCP", 00:19:03.450 "adrfam": "IPv4", 00:19:03.450 "traddr": "10.0.0.2", 00:19:03.450 "trsvcid": "4420" 00:19:03.450 }, 00:19:03.450 "peer_address": { 00:19:03.450 
"trtype": "TCP", 00:19:03.450 "adrfam": "IPv4", 00:19:03.450 "traddr": "10.0.0.1", 00:19:03.450 "trsvcid": "49546" 00:19:03.450 }, 00:19:03.450 "auth": { 00:19:03.450 "state": "completed", 00:19:03.450 "digest": "sha512", 00:19:03.450 "dhgroup": "ffdhe6144" 00:19:03.450 } 00:19:03.450 } 00:19:03.450 ]' 00:19:03.450 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:03.450 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.450 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:03.710 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:03.710 12:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:03.710 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.710 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.710 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.710 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:19:04.340 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.340 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:04.340 12:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.340 12:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.340 12:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.340 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:04.340 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:04.340 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:04.599 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:19:04.599 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:04.599 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.599 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:04.599 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.599 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:19:04.599 12:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:19:04.599 12:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.599 12:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.599 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:04.599 12:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:04.857 00:19:04.857 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:04.857 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:04.857 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:05.115 { 00:19:05.115 "cntlid": 133, 00:19:05.115 "qid": 0, 00:19:05.115 "state": "enabled", 00:19:05.115 "listen_address": { 00:19:05.115 "trtype": "TCP", 00:19:05.115 "adrfam": "IPv4", 00:19:05.115 "traddr": "10.0.0.2", 00:19:05.115 "trsvcid": "4420" 00:19:05.115 }, 00:19:05.115 "peer_address": { 00:19:05.115 "trtype": "TCP", 00:19:05.115 "adrfam": "IPv4", 00:19:05.115 "traddr": "10.0.0.1", 00:19:05.115 "trsvcid": "49570" 00:19:05.115 }, 00:19:05.115 "auth": { 00:19:05.115 "state": "completed", 00:19:05.115 "digest": "sha512", 00:19:05.115 "dhgroup": "ffdhe6144" 00:19:05.115 } 00:19:05.115 } 00:19:05.115 ]' 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.115 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.374 12:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:19:05.942 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.942 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:05.942 12:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.942 12:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.942 12:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.942 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:05.943 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:05.943 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:06.202 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:19:06.202 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:06.202 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.202 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:06.202 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.202 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:06.202 12:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.202 12:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.202 12:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.202 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.202 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.461 00:19:06.461 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:06.461 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:06.461 12:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.721 12:20:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:06.721 { 00:19:06.721 "cntlid": 135, 00:19:06.721 "qid": 0, 00:19:06.721 "state": "enabled", 00:19:06.721 "listen_address": { 00:19:06.721 "trtype": "TCP", 00:19:06.721 "adrfam": "IPv4", 00:19:06.721 "traddr": "10.0.0.2", 00:19:06.721 "trsvcid": "4420" 00:19:06.721 }, 00:19:06.721 "peer_address": { 00:19:06.721 "trtype": "TCP", 00:19:06.721 "adrfam": "IPv4", 00:19:06.721 "traddr": "10.0.0.1", 00:19:06.721 "trsvcid": "49608" 00:19:06.721 }, 00:19:06.721 "auth": { 00:19:06.721 "state": "completed", 00:19:06.721 "digest": "sha512", 00:19:06.721 "dhgroup": "ffdhe6144" 00:19:06.721 } 00:19:06.721 } 00:19:06.721 ]' 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.721 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.980 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:19:07.550 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.550 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:07.550 12:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.550 12:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.550 12:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.550 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.550 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:07.550 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:07.550 12:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:07.550 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:19:07.550 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:07.550 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.550 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:07.550 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.550 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:19:07.550 12:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.550 12:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.550 12:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.550 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:07.550 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:08.118 00:19:08.118 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:08.118 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:08.118 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:08.378 { 00:19:08.378 "cntlid": 137, 00:19:08.378 "qid": 0, 00:19:08.378 "state": "enabled", 00:19:08.378 "listen_address": { 00:19:08.378 "trtype": "TCP", 00:19:08.378 "adrfam": "IPv4", 00:19:08.378 "traddr": "10.0.0.2", 00:19:08.378 "trsvcid": "4420" 00:19:08.378 }, 00:19:08.378 "peer_address": { 00:19:08.378 "trtype": "TCP", 00:19:08.378 "adrfam": "IPv4", 00:19:08.378 "traddr": "10.0.0.1", 00:19:08.378 "trsvcid": "49636" 00:19:08.378 }, 00:19:08.378 "auth": { 00:19:08.378 "state": "completed", 00:19:08.378 "digest": "sha512", 00:19:08.378 "dhgroup": "ffdhe8192" 00:19:08.378 } 00:19:08.378 } 00:19:08.378 ]' 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:08.378 12:20:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.378 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.637 12:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:09.206 12:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:09.775 00:19:09.775 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:09.775 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.775 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:10.035 { 00:19:10.035 "cntlid": 139, 00:19:10.035 "qid": 0, 00:19:10.035 "state": "enabled", 00:19:10.035 "listen_address": { 00:19:10.035 "trtype": "TCP", 00:19:10.035 "adrfam": "IPv4", 00:19:10.035 "traddr": "10.0.0.2", 00:19:10.035 "trsvcid": "4420" 00:19:10.035 }, 00:19:10.035 "peer_address": { 00:19:10.035 "trtype": "TCP", 00:19:10.035 "adrfam": "IPv4", 00:19:10.035 "traddr": "10.0.0.1", 00:19:10.035 "trsvcid": "49086" 00:19:10.035 }, 00:19:10.035 "auth": { 00:19:10.035 "state": "completed", 00:19:10.035 "digest": "sha512", 00:19:10.035 "dhgroup": "ffdhe8192" 00:19:10.035 } 00:19:10.035 } 00:19:10.035 ]' 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.035 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.295 12:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YjU0NjQwODgwY2MyZThiNTUyMjE1MDY3MWY4Mzc4YmP5cPFV: 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.864 12:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.123 12:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.123 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:11.123 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:11.382 00:19:11.382 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:11.382 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.382 12:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:11.647 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.647 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.648 12:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.648 12:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.648 12:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:19:11.648 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:11.648 { 00:19:11.648 "cntlid": 141, 00:19:11.648 "qid": 0, 00:19:11.648 "state": "enabled", 00:19:11.648 "listen_address": { 00:19:11.648 "trtype": "TCP", 00:19:11.648 "adrfam": "IPv4", 00:19:11.648 "traddr": "10.0.0.2", 00:19:11.648 "trsvcid": "4420" 00:19:11.648 }, 00:19:11.648 "peer_address": { 00:19:11.648 "trtype": "TCP", 00:19:11.648 "adrfam": "IPv4", 00:19:11.648 "traddr": "10.0.0.1", 00:19:11.648 "trsvcid": "49106" 00:19:11.648 }, 00:19:11.648 "auth": { 00:19:11.648 "state": "completed", 00:19:11.648 "digest": "sha512", 00:19:11.648 "dhgroup": "ffdhe8192" 00:19:11.648 } 00:19:11.648 } 00:19:11.648 ]' 00:19:11.648 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:11.648 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.648 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:11.648 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:11.648 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:11.906 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.906 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.906 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.906 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDQzYzIyOTRkM2I0YzdmM2Y1OGJjMzk1ZmI1ZTU2MDQyYzczN2VlYWU5NWY3YWQ5N+mmBA==: 00:19:12.475 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.475 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:12.475 12:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.475 12:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.475 12:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.475 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:12.475 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:12.475 12:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:12.734 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:19:12.734 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:12.734 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.734 12:20:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:12.735 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.735 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:12.735 12:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.735 12:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.735 12:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.735 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.735 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.303 00:19:13.303 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:13.303 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:13.303 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.303 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.303 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.303 12:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.303 12:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.303 12:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.303 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:13.303 { 00:19:13.303 "cntlid": 143, 00:19:13.303 "qid": 0, 00:19:13.303 "state": "enabled", 00:19:13.303 "listen_address": { 00:19:13.303 "trtype": "TCP", 00:19:13.303 "adrfam": "IPv4", 00:19:13.303 "traddr": "10.0.0.2", 00:19:13.303 "trsvcid": "4420" 00:19:13.303 }, 00:19:13.303 "peer_address": { 00:19:13.303 "trtype": "TCP", 00:19:13.303 "adrfam": "IPv4", 00:19:13.303 "traddr": "10.0.0.1", 00:19:13.303 "trsvcid": "49132" 00:19:13.303 }, 00:19:13.303 "auth": { 00:19:13.303 "state": "completed", 00:19:13.303 "digest": "sha512", 00:19:13.303 "dhgroup": "ffdhe8192" 00:19:13.303 } 00:19:13.303 } 00:19:13.303 ]' 00:19:13.303 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:13.303 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.303 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:13.561 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:13.561 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:13.561 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.561 12:20:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.561 12:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.561 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:ODJiYzRlMmRhMTgyN2U5YTYwNWI2YTA2OGFiNDY5OTJiMTIyOTExODA2ZTdjNjNhYmUxM2U4ZDJhOGE2NjczYwJOEGo=: 00:19:14.127 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.127 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:14.127 12:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.127 12:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.127 12:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.127 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:19:14.127 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:19:14.127 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:19:14.127 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:14.127 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:14.127 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:14.385 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:19:14.385 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:14.385 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.385 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:14.385 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.385 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:19:14.385 12:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.385 12:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.385 12:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.385 12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:14.385 
12:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:14.952 00:19:14.952 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:14.952 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:14.952 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.952 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.952 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.952 12:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.952 12:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.209 12:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.210 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:15.210 { 00:19:15.210 "cntlid": 145, 00:19:15.210 "qid": 0, 00:19:15.210 "state": "enabled", 00:19:15.210 "listen_address": { 00:19:15.210 "trtype": "TCP", 00:19:15.210 "adrfam": "IPv4", 00:19:15.210 "traddr": "10.0.0.2", 00:19:15.210 "trsvcid": "4420" 00:19:15.210 }, 00:19:15.210 "peer_address": { 00:19:15.210 "trtype": "TCP", 00:19:15.210 "adrfam": "IPv4", 00:19:15.210 "traddr": "10.0.0.1", 00:19:15.210 "trsvcid": "49152" 00:19:15.210 }, 00:19:15.210 "auth": { 00:19:15.210 "state": "completed", 00:19:15.210 "digest": "sha512", 00:19:15.210 "dhgroup": "ffdhe8192" 00:19:15.210 } 00:19:15.210 } 00:19:15.210 ]' 00:19:15.210 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:15.210 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.210 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:15.210 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:15.210 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:15.210 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.210 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.210 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.468 12:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mzg4NzBlNGRiYTcwNTE2ZDI3YWYxMGMxZDUxNWMwOTJkZjIzOGZjYjk0YTJjNTYx3bTwpw==: 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:16.035 12:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:16.291 request: 00:19:16.291 { 00:19:16.291 "name": "nvme0", 00:19:16.291 "trtype": "tcp", 00:19:16.291 "traddr": "10.0.0.2", 00:19:16.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:16.291 "adrfam": "ipv4", 00:19:16.291 "trsvcid": "4420", 00:19:16.291 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:16.291 "dhchap_key": "key2", 00:19:16.291 "method": "bdev_nvme_attach_controller", 00:19:16.291 "req_id": 1 00:19:16.291 } 00:19:16.291 Got JSON-RPC error response 00:19:16.291 response: 00:19:16.291 { 00:19:16.291 "code": -32602, 00:19:16.291 "message": "Invalid parameters" 00:19:16.291 } 00:19:16.291 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:16.291 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:16.291 12:20:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:16.291 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:16.291 12:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:16.291 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.291 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.291 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.291 12:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:19:16.292 12:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:19:16.292 12:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2128751 00:19:16.292 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 2128751 ']' 00:19:16.292 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 2128751 00:19:16.292 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:19:16.292 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:16.292 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2128751 00:19:16.548 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:19:16.548 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:19:16.548 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2128751' 00:19:16.548 killing process with pid 2128751 00:19:16.548 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 2128751 00:19:16.548 12:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 2128751 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.806 rmmod nvme_tcp 00:19:16.806 rmmod nvme_fabrics 00:19:16.806 rmmod nvme_keyring 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2128475 ']' 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2128475 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 2128475 ']' 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 2128475 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:19:16.806 
12:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2128475 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2128475' 00:19:16.806 killing process with pid 2128475 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 2128475 00:19:16.806 12:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 2128475 00:19:17.067 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:17.067 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:17.067 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:17.067 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.067 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:17.067 12:20:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.067 12:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.067 12:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.596 12:20:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:19.596 12:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.V9i /tmp/spdk.key-sha256.GFx /tmp/spdk.key-sha384.Fvp /tmp/spdk.key-sha512.Vu0 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:19.596 00:19:19.596 real 2m4.392s 00:19:19.596 user 4m34.987s 00:19:19.596 sys 0m28.046s 00:19:19.596 12:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:19.596 12:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.596 ************************************ 00:19:19.596 END TEST nvmf_auth_target 00:19:19.596 ************************************ 00:19:19.596 12:20:47 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:19.596 12:20:47 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:19.596 12:20:47 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:19:19.596 12:20:47 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:19.596 12:20:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:19.596 ************************************ 00:19:19.596 START TEST nvmf_bdevio_no_huge 00:19:19.596 ************************************ 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:19.596 * Looking for test storage... 
00:19:19.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.596 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:19.597 12:20:47 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:19.597 12:20:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:26.154 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:26.154 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.154 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:26.155 Found net devices under 0000:af:00.0: cvl_0_0 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.155 12:20:54 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:26.155 Found net devices under 0000:af:00.1: cvl_0_1 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:26.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:19:26.155 00:19:26.155 --- 10.0.0.2 ping statistics --- 00:19:26.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.155 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:19:26.155 00:19:26.155 --- 10.0.0.1 ping statistics --- 00:19:26.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.155 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2152827 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2152827 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # '[' -z 2152827 ']' 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
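Condensed, the target bring-up traced above follows a simple launch-and-wait pattern. The sketch below reuses only the paths, flags and namespace name that appear in this log; the polling loop is an assumption standing in for the suite's waitforlisten helper, and rpc_get_methods is just a convenient RPC to probe the socket with.

    # Start the NVMe-oF target inside the prepared namespace without hugepages,
    # then wait until its JSON-RPC socket answers before issuing any RPCs.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done

Running with --no-huge -s 1024 keeps the target on regular pages with a 1024 MiB memory cap, which is exactly the configuration this bdevio_no_huge variant exercises.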
00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:26.155 12:20:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:26.155 [2024-05-15 12:20:54.639233] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:19:26.155 [2024-05-15 12:20:54.639280] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:26.414 [2024-05-15 12:20:54.719663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.414 [2024-05-15 12:20:54.815369] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.414 [2024-05-15 12:20:54.815403] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.414 [2024-05-15 12:20:54.815412] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.414 [2024-05-15 12:20:54.815420] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.414 [2024-05-15 12:20:54.815427] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.414 [2024-05-15 12:20:54.815541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:26.414 [2024-05-15 12:20:54.815653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:26.414 [2024-05-15 12:20:54.815763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.414 [2024-05-15 12:20:54.815765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@861 -- # return 0 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:26.982 [2024-05-15 12:20:55.479431] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:26.982 Malloc0 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.982 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:27.243 [2024-05-15 12:20:55.515932] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:27.243 [2024-05-15 12:20:55.516146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.243 { 00:19:27.243 "params": { 00:19:27.243 "name": "Nvme$subsystem", 00:19:27.243 "trtype": "$TEST_TRANSPORT", 00:19:27.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.243 "adrfam": "ipv4", 00:19:27.243 "trsvcid": "$NVMF_PORT", 00:19:27.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.243 "hdgst": ${hdgst:-false}, 00:19:27.243 "ddgst": ${ddgst:-false} 00:19:27.243 }, 00:19:27.243 "method": "bdev_nvme_attach_controller" 00:19:27.243 } 00:19:27.243 EOF 00:19:27.243 )") 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
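For reference, the heredoc fragment being assembled above ends up inside an SPDK JSON config that bdevio consumes through --json /dev/fd/62. A minimal sketch of that config follows: the bdev_nvme_attach_controller parameters are the ones printed verbatim a little further down in this trace, the surrounding subsystems/bdev wrapper is the usual SPDK --json layout (any extra entries the helper may emit are not visible in this excerpt), and /tmp/bdevio_nvme.json is a hypothetical file name used here in place of the process substitution.

    # Write the generated bdev config to a file and hand it to bdevio,
    # mirroring the traced invocation (which feeds it via /dev/fd/62 instead).
    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
        --json /tmp/bdevio_nvme.json --no-huge -s 1024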
00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:27.243 12:20:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:27.243 "params": { 00:19:27.243 "name": "Nvme1", 00:19:27.243 "trtype": "tcp", 00:19:27.243 "traddr": "10.0.0.2", 00:19:27.243 "adrfam": "ipv4", 00:19:27.243 "trsvcid": "4420", 00:19:27.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.243 "hdgst": false, 00:19:27.243 "ddgst": false 00:19:27.243 }, 00:19:27.243 "method": "bdev_nvme_attach_controller" 00:19:27.243 }' 00:19:27.243 [2024-05-15 12:20:55.565508] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:19:27.243 [2024-05-15 12:20:55.565555] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2153099 ] 00:19:27.243 [2024-05-15 12:20:55.640429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:27.243 [2024-05-15 12:20:55.740769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.243 [2024-05-15 12:20:55.740860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.243 [2024-05-15 12:20:55.740862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.502 I/O targets: 00:19:27.502 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:27.502 00:19:27.502 00:19:27.502 CUnit - A unit testing framework for C - Version 2.1-3 00:19:27.502 http://cunit.sourceforge.net/ 00:19:27.502 00:19:27.502 00:19:27.502 Suite: bdevio tests on: Nvme1n1 00:19:27.502 Test: blockdev write read block ...passed 00:19:27.502 Test: blockdev write zeroes read block ...passed 00:19:27.502 Test: blockdev write zeroes read no split ...passed 00:19:27.761 Test: blockdev write zeroes read split ...passed 00:19:27.761 Test: blockdev write zeroes read split partial ...passed 00:19:27.761 Test: blockdev reset ...[2024-05-15 12:20:56.142705] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:27.761 [2024-05-15 12:20:56.142763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1884910 (9): Bad file descriptor 00:19:27.761 [2024-05-15 12:20:56.172365] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:27.761 passed 00:19:27.761 Test: blockdev write read 8 blocks ...passed 00:19:27.761 Test: blockdev write read size > 128k ...passed 00:19:27.761 Test: blockdev write read invalid size ...passed 00:19:27.761 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:27.761 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:27.761 Test: blockdev write read max offset ...passed 00:19:28.020 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:28.020 Test: blockdev writev readv 8 blocks ...passed 00:19:28.020 Test: blockdev writev readv 30 x 1block ...passed 00:19:28.020 Test: blockdev writev readv block ...passed 00:19:28.020 Test: blockdev writev readv size > 128k ...passed 00:19:28.020 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:28.020 Test: blockdev comparev and writev ...[2024-05-15 12:20:56.447659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.020 [2024-05-15 12:20:56.447690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.020 [2024-05-15 12:20:56.447706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.020 [2024-05-15 12:20:56.447717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:28.020 [2024-05-15 12:20:56.448153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.020 [2024-05-15 12:20:56.448166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:28.020 [2024-05-15 12:20:56.448180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.020 [2024-05-15 12:20:56.448196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:28.020 [2024-05-15 12:20:56.448637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.020 [2024-05-15 12:20:56.448650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:28.020 [2024-05-15 12:20:56.448664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.020 [2024-05-15 12:20:56.448675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:28.020 [2024-05-15 12:20:56.449111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.020 [2024-05-15 12:20:56.449123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:28.020 [2024-05-15 12:20:56.449137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:28.020 [2024-05-15 12:20:56.449147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:28.020 passed 00:19:28.020 Test: blockdev nvme passthru rw ...passed 00:19:28.020 Test: blockdev nvme passthru vendor specific ...[2024-05-15 12:20:56.533938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:28.020 [2024-05-15 12:20:56.533956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:28.020 [2024-05-15 12:20:56.534277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:28.020 [2024-05-15 12:20:56.534290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:28.020 [2024-05-15 12:20:56.534606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:28.020 [2024-05-15 12:20:56.534618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:28.020 [2024-05-15 12:20:56.534939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:28.020 [2024-05-15 12:20:56.534951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:28.020 passed 00:19:28.279 Test: blockdev nvme admin passthru ...passed 00:19:28.279 Test: blockdev copy ...passed 00:19:28.279 00:19:28.279 Run Summary: Type Total Ran Passed Failed Inactive 00:19:28.279 suites 1 1 n/a 0 0 00:19:28.279 tests 23 23 23 0 0 00:19:28.279 asserts 152 152 152 0 n/a 00:19:28.279 00:19:28.279 Elapsed time = 1.407 seconds 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:28.539 rmmod nvme_tcp 00:19:28.539 rmmod nvme_fabrics 00:19:28.539 rmmod nvme_keyring 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2152827 ']' 00:19:28.539 12:20:56 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2152827 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' -z 2152827 ']' 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # kill -0 2152827 00:19:28.539 12:20:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # uname 00:19:28.539 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:28.539 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2152827 00:19:28.539 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:19:28.539 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:19:28.539 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2152827' 00:19:28.539 killing process with pid 2152827 00:19:28.539 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # kill 2152827 00:19:28.539 [2024-05-15 12:20:57.054783] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:28.539 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # wait 2152827 00:19:29.108 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:29.108 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:29.108 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:29.108 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.108 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:29.108 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.108 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.108 12:20:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.014 12:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:31.014 00:19:31.014 real 0m11.844s 00:19:31.014 user 0m14.199s 00:19:31.014 sys 0m6.298s 00:19:31.014 12:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:31.014 12:20:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:31.014 ************************************ 00:19:31.014 END TEST nvmf_bdevio_no_huge 00:19:31.014 ************************************ 00:19:31.272 12:20:59 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:31.272 12:20:59 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:31.272 12:20:59 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:31.272 12:20:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:31.272 ************************************ 00:19:31.272 START TEST nvmf_tls 00:19:31.272 ************************************ 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
00:19:31.272 * Looking for test storage... 00:19:31.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.272 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:31.273 12:20:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:39.388 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:39.388 
12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:39.388 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.388 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:39.389 Found net devices under 0000:af:00.0: cvl_0_0 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:39.389 Found net devices under 0000:af:00.1: cvl_0_1 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.389 
12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:39.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:19:39.389 00:19:39.389 --- 10.0.0.2 ping statistics --- 00:19:39.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.389 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
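Editor's note: the nvmf_tcp_init trace above reduces to a short netns recipe. A hedged consolidation of exactly the commands shown, keeping the interface names (cvl_0_0/cvl_0_1) and addresses from this run:

# sketch of the TCP test network built by nvmf_tcp_init above
TARGET_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$TARGET_NS"                        # target runs inside its own namespace
ip link set cvl_0_0 netns "$TARGET_NS"           # move the target-side e810 port into it

ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address (host side)
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address

ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP traffic

ping -c 1 10.0.0.2                               # initiator -> target, verified above
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1    # target -> initiator, verified above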
00:19:39.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:19:39.389 00:19:39.389 --- 10.0.0.1 ping statistics --- 00:19:39.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.389 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2157623 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2157623 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2157623 ']' 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:39.389 12:21:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.389 [2024-05-15 12:21:06.823631] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:19:39.389 [2024-05-15 12:21:06.823684] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.389 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.389 [2024-05-15 12:21:06.898493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.389 [2024-05-15 12:21:06.970955] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.389 [2024-05-15 12:21:06.970994] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:39.389 [2024-05-15 12:21:06.971003] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.389 [2024-05-15 12:21:06.971011] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.389 [2024-05-15 12:21:06.971019] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.389 [2024-05-15 12:21:06.971040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.389 12:21:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:39.389 12:21:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:19:39.389 12:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.389 12:21:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:39.389 12:21:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.389 12:21:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.389 12:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:39.389 12:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:39.389 true 00:19:39.389 12:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.389 12:21:07 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:39.647 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:39.647 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:39.647 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:39.904 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:39.904 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:39.904 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:39.904 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:39.904 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:40.181 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.181 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:40.181 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:40.181 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:40.181 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.181 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:40.452 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:40.452 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:40.452 12:21:08 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:40.710 12:21:09 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.710 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:40.710 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:40.710 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:40.710 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:40.968 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:40.968 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.ta5dgaKo65 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.HwU5N4nqbG 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ta5dgaKo65 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.HwU5N4nqbG 00:19:41.226 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:19:41.484 12:21:09 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:41.742 12:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ta5dgaKo65 00:19:41.742 12:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ta5dgaKo65 00:19:41.742 12:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:41.742 [2024-05-15 12:21:10.221928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.742 12:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:42.001 12:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:42.259 [2024-05-15 12:21:10.562794] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:42.259 [2024-05-15 12:21:10.562847] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:42.259 [2024-05-15 12:21:10.563036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.259 12:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:42.259 malloc0 00:19:42.259 12:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:42.518 12:21:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ta5dgaKo65 00:19:42.776 [2024-05-15 12:21:11.096721] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:42.776 12:21:11 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ta5dgaKo65 00:19:42.776 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.760 Initializing NVMe Controllers 00:19:52.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:52.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:52.760 Initialization complete. Launching workers. 
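Editor's note: by this point the trace has generated two interchange-format PSKs, started nvmf_tgt inside the target namespace, enabled TLS 1.3 on the ssl sock implementation, created a TLS listener, and registered host1 with the first key before launching spdk_nvme_perf (whose results follow). A consolidated sketch of those steps using the paths, NQNs and keys from this run; the CRC32/byte-order detail inside the key helper is an inference from the keys printed above, not something stated in the log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
NS_EXEC="ip netns exec cvl_0_0_ns_spdk"

# PSK in NVMe/TCP interchange format: NVMeTLSkey-1:01:<base64(key || crc32(key))>:
# (assumed layout; it reproduces NVMeTLSkey-1:01:MDAx...wJEiQ: for the first key)
format_interchange_psk() {
    python3 - "$1" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # byte order assumed
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":", end="")
PYEOF
}

key_path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff > "$key_path"
chmod 0600 "$key_path"                         # key files must not be world readable

# Target: started with --wait-for-rpc, so framework_start_init only runs after
# the sock options have been applied (the test waits for /var/tmp/spdk.sock first).
$NS_EXEC $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &

$RPC sock_set_default_impl -i ssl
$RPC sock_impl_set_options -i ssl --tls-version 13
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

# Initiator: the perf run whose latency table follows in the log.
$NS_EXEC $SPDK/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key_path"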
00:19:52.760 ======================================================== 00:19:52.760 Latency(us) 00:19:52.760 Device Information : IOPS MiB/s Average min max 00:19:52.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16497.75 64.44 3879.70 789.05 5042.85 00:19:52.760 ======================================================== 00:19:52.760 Total : 16497.75 64.44 3879.70 789.05 5042.85 00:19:52.760 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ta5dgaKo65 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ta5dgaKo65' 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2160075 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2160075 /var/tmp/bdevperf.sock 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2160075 ']' 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:52.760 12:21:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.760 [2024-05-15 12:21:21.263888] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:19:52.760 [2024-05-15 12:21:21.263944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160075 ] 00:19:53.018 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.018 [2024-05-15 12:21:21.330038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.018 [2024-05-15 12:21:21.399255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.584 12:21:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:53.584 12:21:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:19:53.584 12:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ta5dgaKo65 00:19:53.843 [2024-05-15 12:21:22.212844] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.843 [2024-05-15 12:21:22.212931] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:53.843 TLSTESTn1 00:19:53.843 12:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:54.101 Running I/O for 10 seconds... 00:20:04.074 00:20:04.074 Latency(us) 00:20:04.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.074 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:04.074 Verification LBA range: start 0x0 length 0x2000 00:20:04.075 TLSTESTn1 : 10.06 1831.19 7.15 0.00 0.00 69721.41 5190.45 120795.96 00:20:04.075 =================================================================================================================== 00:20:04.075 Total : 1831.19 7.15 0.00 0.00 69721.41 5190.45 120795.96 00:20:04.075 0 00:20:04.075 12:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.075 12:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2160075 00:20:04.075 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2160075 ']' 00:20:04.075 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2160075 00:20:04.075 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:04.075 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:04.075 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2160075 00:20:04.075 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:04.075 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:04.075 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2160075' 00:20:04.075 killing process with pid 2160075 00:20:04.075 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2160075 00:20:04.075 Received shutdown signal, test time was about 10.000000 seconds 00:20:04.075 00:20:04.075 Latency(us) 00:20:04.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:04.075 =================================================================================================================== 00:20:04.075 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.075 [2024-05-15 12:21:32.544837] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:04.075 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2160075 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HwU5N4nqbG 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HwU5N4nqbG 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HwU5N4nqbG 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HwU5N4nqbG' 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2162047 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2162047 /var/tmp/bdevperf.sock 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2162047 ']' 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:04.333 12:21:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.333 [2024-05-15 12:21:32.800149] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:20:04.333 [2024-05-15 12:21:32.800211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162047 ] 00:20:04.333 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.592 [2024-05-15 12:21:32.869069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.592 [2024-05-15 12:21:32.941263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.160 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:05.160 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:05.160 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HwU5N4nqbG 00:20:05.418 [2024-05-15 12:21:33.719195] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.419 [2024-05-15 12:21:33.719276] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:05.419 [2024-05-15 12:21:33.724108] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:05.419 [2024-05-15 12:21:33.724657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b6610 (107): Transport endpoint is not connected 00:20:05.419 [2024-05-15 12:21:33.725648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b6610 (9): Bad file descriptor 00:20:05.419 [2024-05-15 12:21:33.726650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:05.419 [2024-05-15 12:21:33.726661] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:05.419 [2024-05-15 12:21:33.726672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
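Editor's note: the errors just above, and the JSON-RPC error response that follows, are the expected outcome of target/tls.sh@146: the initiator presents the second key while the target only registered host1 with the first one. A hedged sketch of the initiator-side flow being exercised here, built from the bdevperf commands visible in the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z) with its own RPC socket, as in the trace.
$SPDK/build/examples/bdevperf -m 0x4 -z -r "$BPERF_SOCK" -q 128 -o 4096 -w verify -t 10 &

# Attach a TLS-protected controller. With the key the target trusts
# (/tmp/tmp.ta5dgaKo65 in this run) the attach succeeds; with the mismatched
# second key it is rejected, which is what the surrounding NOT wrapper expects.
$SPDK/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.HwU5N4nqbG       # mismatched key -> bdev_nvme_attach_controller fails

# On a successful attach, the workload itself is driven over the same socket:
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$BPERF_SOCK" perform_tests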
00:20:05.419 request: 00:20:05.419 { 00:20:05.419 "name": "TLSTEST", 00:20:05.419 "trtype": "tcp", 00:20:05.419 "traddr": "10.0.0.2", 00:20:05.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.419 "adrfam": "ipv4", 00:20:05.419 "trsvcid": "4420", 00:20:05.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.419 "psk": "/tmp/tmp.HwU5N4nqbG", 00:20:05.419 "method": "bdev_nvme_attach_controller", 00:20:05.419 "req_id": 1 00:20:05.419 } 00:20:05.419 Got JSON-RPC error response 00:20:05.419 response: 00:20:05.419 { 00:20:05.419 "code": -32602, 00:20:05.419 "message": "Invalid parameters" 00:20:05.419 } 00:20:05.419 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2162047 00:20:05.419 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2162047 ']' 00:20:05.419 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2162047 00:20:05.419 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:05.419 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:05.419 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2162047 00:20:05.419 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:05.419 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:05.419 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2162047' 00:20:05.419 killing process with pid 2162047 00:20:05.419 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2162047 00:20:05.419 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.419 00:20:05.419 Latency(us) 00:20:05.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.419 =================================================================================================================== 00:20:05.419 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:05.419 [2024-05-15 12:21:33.797001] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:05.419 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2162047 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ta5dgaKo65 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ta5dgaKo65 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 
-- # case "$(type -t "$arg")" in 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ta5dgaKo65 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ta5dgaKo65' 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2162210 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2162210 /var/tmp/bdevperf.sock 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2162210 ']' 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:05.678 12:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.678 [2024-05-15 12:21:34.038766] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:20:05.678 [2024-05-15 12:21:34.038820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162210 ] 00:20:05.678 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.678 [2024-05-15 12:21:34.106167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.678 [2024-05-15 12:21:34.172252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.630 12:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:06.630 12:21:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:06.630 12:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ta5dgaKo65 00:20:06.630 [2024-05-15 12:21:34.990862] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.630 [2024-05-15 12:21:34.990960] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:06.630 [2024-05-15 12:21:34.996664] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:06.630 [2024-05-15 12:21:34.996689] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:06.630 [2024-05-15 12:21:34.996735] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:06.630 [2024-05-15 12:21:34.997381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x879610 (107): Transport endpoint is not connected 00:20:06.630 [2024-05-15 12:21:34.998373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x879610 (9): Bad file descriptor 00:20:06.630 [2024-05-15 12:21:34.999374] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:06.630 [2024-05-15 12:21:34.999386] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:06.630 [2024-05-15 12:21:34.999398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:06.630 request: 00:20:06.630 { 00:20:06.630 "name": "TLSTEST", 00:20:06.630 "trtype": "tcp", 00:20:06.630 "traddr": "10.0.0.2", 00:20:06.630 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:06.630 "adrfam": "ipv4", 00:20:06.630 "trsvcid": "4420", 00:20:06.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.630 "psk": "/tmp/tmp.ta5dgaKo65", 00:20:06.630 "method": "bdev_nvme_attach_controller", 00:20:06.630 "req_id": 1 00:20:06.630 } 00:20:06.630 Got JSON-RPC error response 00:20:06.630 response: 00:20:06.630 { 00:20:06.630 "code": -32602, 00:20:06.630 "message": "Invalid parameters" 00:20:06.630 } 00:20:06.630 12:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2162210 00:20:06.630 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2162210 ']' 00:20:06.630 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2162210 00:20:06.630 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:06.630 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:06.630 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2162210 00:20:06.630 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:06.630 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:06.630 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2162210' 00:20:06.630 killing process with pid 2162210 00:20:06.630 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2162210 00:20:06.630 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.630 00:20:06.630 Latency(us) 00:20:06.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.630 =================================================================================================================== 00:20:06.630 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.630 [2024-05-15 12:21:35.071572] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:06.630 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2162210 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ta5dgaKo65 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ta5dgaKo65 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 
-- # case "$(type -t "$arg")" in 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ta5dgaKo65 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ta5dgaKo65' 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2162489 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2162489 /var/tmp/bdevperf.sock 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2162489 ']' 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:06.900 12:21:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.900 [2024-05-15 12:21:35.313523] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:20:06.900 [2024-05-15 12:21:35.313577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162489 ] 00:20:06.900 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.900 [2024-05-15 12:21:35.379305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.159 [2024-05-15 12:21:35.444285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.725 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:07.725 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:07.725 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ta5dgaKo65 00:20:07.983 [2024-05-15 12:21:36.269960] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.983 [2024-05-15 12:21:36.270054] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:07.983 [2024-05-15 12:21:36.279913] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:07.983 [2024-05-15 12:21:36.279937] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:07.983 [2024-05-15 12:21:36.279967] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:07.983 [2024-05-15 12:21:36.280469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21da610 (107): Transport endpoint is not connected 00:20:07.983 [2024-05-15 12:21:36.281461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21da610 (9): Bad file descriptor 00:20:07.983 [2024-05-15 12:21:36.282463] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:07.983 [2024-05-15 12:21:36.282475] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:07.983 [2024-05-15 12:21:36.282487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:07.983 request: 00:20:07.983 { 00:20:07.983 "name": "TLSTEST", 00:20:07.983 "trtype": "tcp", 00:20:07.983 "traddr": "10.0.0.2", 00:20:07.983 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.983 "adrfam": "ipv4", 00:20:07.983 "trsvcid": "4420", 00:20:07.983 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:07.983 "psk": "/tmp/tmp.ta5dgaKo65", 00:20:07.983 "method": "bdev_nvme_attach_controller", 00:20:07.983 "req_id": 1 00:20:07.983 } 00:20:07.983 Got JSON-RPC error response 00:20:07.983 response: 00:20:07.983 { 00:20:07.983 "code": -32602, 00:20:07.983 "message": "Invalid parameters" 00:20:07.983 } 00:20:07.983 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2162489 00:20:07.983 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2162489 ']' 00:20:07.983 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2162489 00:20:07.983 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:07.983 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:07.983 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2162489 00:20:07.983 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:07.983 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:07.983 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2162489' 00:20:07.983 killing process with pid 2162489 00:20:07.983 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2162489 00:20:07.983 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.983 00:20:07.983 Latency(us) 00:20:07.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.983 =================================================================================================================== 00:20:07.983 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:07.983 [2024-05-15 12:21:36.355855] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:07.983 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2162489 00:20:08.242 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2162755 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2162755 /var/tmp/bdevperf.sock 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2162755 ']' 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:08.243 12:21:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.243 [2024-05-15 12:21:36.598938] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:20:08.243 [2024-05-15 12:21:36.598993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162755 ] 00:20:08.243 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.243 [2024-05-15 12:21:36.665160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.243 [2024-05-15 12:21:36.729340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:09.180 [2024-05-15 12:21:37.573237] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:09.180 [2024-05-15 12:21:37.574810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x891cc0 (9): Bad file descriptor 00:20:09.180 [2024-05-15 12:21:37.575808] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.180 [2024-05-15 12:21:37.575821] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:09.180 [2024-05-15 12:21:37.575832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:09.180 request: 00:20:09.180 { 00:20:09.180 "name": "TLSTEST", 00:20:09.180 "trtype": "tcp", 00:20:09.180 "traddr": "10.0.0.2", 00:20:09.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.180 "adrfam": "ipv4", 00:20:09.180 "trsvcid": "4420", 00:20:09.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.180 "method": "bdev_nvme_attach_controller", 00:20:09.180 "req_id": 1 00:20:09.180 } 00:20:09.180 Got JSON-RPC error response 00:20:09.180 response: 00:20:09.180 { 00:20:09.180 "code": -32602, 00:20:09.180 "message": "Invalid parameters" 00:20:09.180 } 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2162755 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2162755 ']' 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2162755 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2162755 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2162755' 00:20:09.180 killing process with pid 2162755 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2162755 00:20:09.180 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.180 00:20:09.180 Latency(us) 00:20:09.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.180 =================================================================================================================== 00:20:09.180 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:09.180 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2162755 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2157623 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2157623 ']' 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2157623 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2157623 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2157623' 00:20:09.440 killing process with pid 2157623 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2157623 
00:20:09.440 [2024-05-15 12:21:37.893889] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:09.440 [2024-05-15 12:21:37.893928] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:09.440 12:21:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2157623 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.HAVg2QEXy6 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.HAVg2QEXy6 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2163044 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2163044 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2163044 ']' 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:09.700 12:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.700 [2024-05-15 12:21:38.210216] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:20:09.700 [2024-05-15 12:21:38.210268] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.960 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.960 [2024-05-15 12:21:38.281422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.960 [2024-05-15 12:21:38.351825] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.960 [2024-05-15 12:21:38.351866] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.960 [2024-05-15 12:21:38.351875] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.960 [2024-05-15 12:21:38.351884] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.960 [2024-05-15 12:21:38.351891] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.960 [2024-05-15 12:21:38.351917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.528 12:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:10.528 12:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:10.528 12:21:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.528 12:21:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:10.528 12:21:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.528 12:21:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.528 12:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.HAVg2QEXy6 00:20:10.528 12:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HAVg2QEXy6 00:20:10.528 12:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:10.786 [2024-05-15 12:21:39.202442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.786 12:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:11.043 12:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:11.043 [2024-05-15 12:21:39.551288] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:11.043 [2024-05-15 12:21:39.551360] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:11.043 [2024-05-15 12:21:39.551544] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.043 12:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:11.301 malloc0 00:20:11.301 12:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
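
The run above generates a long TLS key with `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2`, yielding `NVMeTLSkey-1:02:MDAx...wWXNJw==:`, writes it to `/tmp/tmp.HAVg2QEXy6` with mode 0600, and then brings up the target side (transport, subsystem, listener with `-k`, malloc0 namespace) before registering the host with `--psk`. Below is a minimal Python sketch of how such an interchange string can be assembled. It assumes, based on the helper's output, that the base64 payload is the configured key bytes followed by their little-endian CRC-32; the output file path is a placeholder, not the temp file from this log.

```python
import base64
import os
import zlib


def format_interchange_psk(key: bytes, digest_id: int) -> str:
    """Assemble an NVMe TLS PSK interchange string.

    Assumption: the base64 payload is the configured key bytes followed by
    their CRC-32 in little-endian byte order, which is what the test's
    format_interchange_psk helper appears to emit.
    """
    crc = zlib.crc32(key) & 0xFFFFFFFF
    payload = key + crc.to_bytes(4, "little")
    return f"NVMeTLSkey-1:{digest_id:02d}:{base64.b64encode(payload).decode()}:"


if __name__ == "__main__":
    # The test passes the 48-character hex string itself as the key material.
    key = b"00112233445566778899aabbccddeeff0011223344556677"
    psk = format_interchange_psk(key, 2)
    print(psk)

    # Store it the way target/tls.sh does: a file only the owner can read.
    path = "/tmp/psk_example.key"  # placeholder path, not the one from the log
    with open(path, "w") as f:
        f.write(psk)
    os.chmod(path, 0o600)
```
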
00:20:11.558 12:21:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HAVg2QEXy6 00:20:11.559 [2024-05-15 12:21:40.068999] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:11.559 12:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HAVg2QEXy6 00:20:11.559 12:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:11.559 12:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:11.559 12:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:11.559 12:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HAVg2QEXy6' 00:20:11.559 12:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:11.816 12:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2163337 00:20:11.816 12:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:11.816 12:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:11.816 12:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2163337 /var/tmp/bdevperf.sock 00:20:11.816 12:21:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2163337 ']' 00:20:11.816 12:21:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.816 12:21:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:11.816 12:21:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.816 12:21:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:11.816 12:21:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.816 [2024-05-15 12:21:40.133559] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:20:11.816 [2024-05-15 12:21:40.133614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163337 ] 00:20:11.816 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.816 [2024-05-15 12:21:40.202302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.816 [2024-05-15 12:21:40.271565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.752 12:21:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:12.752 12:21:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:12.752 12:21:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HAVg2QEXy6 00:20:12.752 [2024-05-15 12:21:41.086933] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.752 [2024-05-15 12:21:41.087028] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:12.752 TLSTESTn1 00:20:12.752 12:21:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:12.752 Running I/O for 10 seconds... 00:20:24.954 00:20:24.954 Latency(us) 00:20:24.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.954 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:24.954 Verification LBA range: start 0x0 length 0x2000 00:20:24.954 TLSTESTn1 : 10.06 1820.35 7.11 0.00 0.00 70125.11 6920.60 118279.37 00:20:24.954 =================================================================================================================== 00:20:24.954 Total : 1820.35 7.11 0.00 0.00 70125.11 6920.60 118279.37 00:20:24.954 0 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2163337 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2163337 ']' 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2163337 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2163337 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2163337' 00:20:24.954 killing process with pid 2163337 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2163337 00:20:24.954 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.954 00:20:24.954 Latency(us) 00:20:24.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:24.954 =================================================================================================================== 00:20:24.954 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.954 [2024-05-15 12:21:51.423281] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2163337 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.HAVg2QEXy6 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HAVg2QEXy6 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HAVg2QEXy6 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:24.954 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HAVg2QEXy6 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HAVg2QEXy6' 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2165348 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2165348 /var/tmp/bdevperf.sock 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2165348 ']' 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:24.955 12:21:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.955 [2024-05-15 12:21:51.679330] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:20:24.955 [2024-05-15 12:21:51.679387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165348 ] 00:20:24.955 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.955 [2024-05-15 12:21:51.745335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.955 [2024-05-15 12:21:51.820431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HAVg2QEXy6 00:20:24.955 [2024-05-15 12:21:52.651218] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.955 [2024-05-15 12:21:52.651266] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:24.955 [2024-05-15 12:21:52.651275] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.HAVg2QEXy6 00:20:24.955 request: 00:20:24.955 { 00:20:24.955 "name": "TLSTEST", 00:20:24.955 "trtype": "tcp", 00:20:24.955 "traddr": "10.0.0.2", 00:20:24.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.955 "adrfam": "ipv4", 00:20:24.955 "trsvcid": "4420", 00:20:24.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.955 "psk": "/tmp/tmp.HAVg2QEXy6", 00:20:24.955 "method": "bdev_nvme_attach_controller", 00:20:24.955 "req_id": 1 00:20:24.955 } 00:20:24.955 Got JSON-RPC error response 00:20:24.955 response: 00:20:24.955 { 00:20:24.955 "code": -1, 00:20:24.955 "message": "Operation not permitted" 00:20:24.955 } 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2165348 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2165348 ']' 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2165348 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2165348 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2165348' 00:20:24.955 killing process with pid 2165348 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2165348 00:20:24.955 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.955 00:20:24.955 Latency(us) 00:20:24.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.955 =================================================================================================================== 00:20:24.955 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 
-- # wait 2165348 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2163044 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2163044 ']' 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2163044 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2163044 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2163044' 00:20:24.955 killing process with pid 2163044 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2163044 00:20:24.955 [2024-05-15 12:21:52.980763] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:24.955 [2024-05-15 12:21:52.980806] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:24.955 12:21:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2163044 00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2165623 00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2165623 00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2165623 ']' 00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
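
The `chmod 0666` negative test above fails on both sides: the initiator's `bdev_nvme_load_psk` reports "Incorrect permissions for PSK file" and `bdev_nvme_attach_controller` returns -1 "Operation not permitted", while the target's `tcp_load_psk` rejects the same file so `nvmf_subsystem_add_host` returns -32603 "Internal error". A small pre-flight check that a wrapper script might run before handing a PSK path to these RPCs is sketched below; it assumes the rule being enforced is simply "no group/other permission bits", which is what the 0600-passes / 0666-fails pattern in this log suggests, not a documented guarantee.

```python
import os
import stat
import sys


def psk_file_looks_usable(path: str) -> bool:
    """Heuristic pre-check mirroring the behaviour seen in the log:
    a PSK file with group/other bits set (e.g. 0666) is rejected,
    while 0600 is accepted. The exact rule SPDK enforces is an assumption.
    """
    st = os.stat(path)
    if not stat.S_ISREG(st.st_mode):
        return False
    # Reject any group/other read/write/execute bits.
    return (st.st_mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/tmp/psk_example.key"
    if psk_file_looks_usable(path):
        print(f"{path}: permissions look OK for use with --psk")
    else:
        print(f"{path}: tighten permissions first, e.g. chmod 0600 {path}")
```
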
00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:24.955 12:21:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.955 [2024-05-15 12:21:53.245087] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:20:24.955 [2024-05-15 12:21:53.245136] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.955 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.955 [2024-05-15 12:21:53.318445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.955 [2024-05-15 12:21:53.390182] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.955 [2024-05-15 12:21:53.390223] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.955 [2024-05-15 12:21:53.390232] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.955 [2024-05-15 12:21:53.390241] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.955 [2024-05-15 12:21:53.390248] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.955 [2024-05-15 12:21:53.390267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.523 12:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:25.523 12:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:25.523 12:21:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:25.523 12:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:25.523 12:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.783 12:21:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.783 12:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.HAVg2QEXy6 00:20:25.783 12:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:25.783 12:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.HAVg2QEXy6 00:20:25.783 12:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:20:25.783 12:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:25.783 12:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:20:25.783 12:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:25.783 12:21:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.HAVg2QEXy6 00:20:25.783 12:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HAVg2QEXy6 00:20:25.783 12:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:25.783 [2024-05-15 12:21:54.223608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.783 12:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:26.041 12:21:54 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:26.041 [2024-05-15 12:21:54.552434] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:26.041 [2024-05-15 12:21:54.552479] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:26.041 [2024-05-15 12:21:54.552662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.041 12:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:26.300 malloc0 00:20:26.300 12:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:26.559 12:21:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HAVg2QEXy6 00:20:26.559 [2024-05-15 12:21:55.070069] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:26.559 [2024-05-15 12:21:55.070097] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:26.559 [2024-05-15 12:21:55.070137] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:26.559 request: 00:20:26.559 { 00:20:26.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.559 "host": "nqn.2016-06.io.spdk:host1", 00:20:26.559 "psk": "/tmp/tmp.HAVg2QEXy6", 00:20:26.559 "method": "nvmf_subsystem_add_host", 00:20:26.559 "req_id": 1 00:20:26.559 } 00:20:26.559 Got JSON-RPC error response 00:20:26.559 response: 00:20:26.559 { 00:20:26.559 "code": -32603, 00:20:26.559 "message": "Internal error" 00:20:26.559 } 00:20:26.559 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:26.559 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:26.559 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:26.559 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:26.559 12:21:55 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2165623 00:20:26.559 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2165623 ']' 00:20:26.819 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2165623 00:20:26.819 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:26.819 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:26.819 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2165623 00:20:26.819 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:26.819 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:26.819 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2165623' 00:20:26.819 killing process with pid 2165623 00:20:26.819 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2165623 00:20:26.819 [2024-05-15 12:21:55.141801] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:26.819 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2165623 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.HAVg2QEXy6 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2166041 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2166041 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2166041 ']' 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.078 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:27.079 12:21:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.079 [2024-05-15 12:21:55.408125] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:20:27.079 [2024-05-15 12:21:55.408178] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.079 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.079 [2024-05-15 12:21:55.479861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.079 [2024-05-15 12:21:55.550482] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.079 [2024-05-15 12:21:55.550524] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.079 [2024-05-15 12:21:55.550535] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.079 [2024-05-15 12:21:55.550545] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.079 [2024-05-15 12:21:55.550552] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
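
Each failed attach above echoes the JSON-RPC request that `scripts/rpc.py -s /var/tmp/bdevperf.sock` sends to the bdevperf application (`bdev_nvme_attach_controller` with `name`, `trtype`, `traddr`, `adrfam`, `trsvcid`, `subnqn`, `hostnqn` and `psk`). The sketch below issues the same call without rpc.py, assuming the server accepts plain JSON-RPC 2.0 objects written to its Unix-domain socket, which is how rpc.py itself talks to it; the socket and PSK paths are placeholders.

```python
import json
import socket


def rpc_call(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    """Send one JSON-RPC 2.0 request over an SPDK application's Unix-domain
    RPC socket and return the decoded response object."""
    request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    decoder = json.JSONDecoder()
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(request).encode())
        buf = ""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full response arrived")
            buf += chunk.decode()
            try:
                response, _ = decoder.raw_decode(buf)
                return response
            except json.JSONDecodeError:
                continue  # response not complete yet, keep reading


if __name__ == "__main__":
    # Same parameters as the requests echoed in the log; paths are placeholders.
    reply = rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "/tmp/psk_example.key",
    })
    print(json.dumps(reply, indent=2))
```
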
00:20:27.079 [2024-05-15 12:21:55.550573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.014 12:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:28.014 12:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:28.014 12:21:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.014 12:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:28.014 12:21:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.014 12:21:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.014 12:21:56 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.HAVg2QEXy6 00:20:28.014 12:21:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HAVg2QEXy6 00:20:28.014 12:21:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:28.014 [2024-05-15 12:21:56.397843] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.014 12:21:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:28.272 12:21:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:28.272 [2024-05-15 12:21:56.742693] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:28.273 [2024-05-15 12:21:56.742757] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:28.273 [2024-05-15 12:21:56.742935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.273 12:21:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:28.570 malloc0 00:20:28.570 12:21:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:28.842 12:21:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HAVg2QEXy6 00:20:28.842 [2024-05-15 12:21:57.220094] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:28.842 12:21:57 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.842 12:21:57 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2166334 00:20:28.842 12:21:57 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:28.842 12:21:57 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2166334 /var/tmp/bdevperf.sock 00:20:28.842 12:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2166334 ']' 00:20:28.842 12:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:20:28.842 12:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:28.842 12:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.842 12:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:28.842 12:21:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.842 [2024-05-15 12:21:57.269979] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:20:28.842 [2024-05-15 12:21:57.270028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166334 ] 00:20:28.842 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.842 [2024-05-15 12:21:57.336484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.101 [2024-05-15 12:21:57.411635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.667 12:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:29.667 12:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:29.668 12:21:58 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HAVg2QEXy6 00:20:29.926 [2024-05-15 12:21:58.237416] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.926 [2024-05-15 12:21:58.237501] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:29.926 TLSTESTn1 00:20:29.926 12:21:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:30.186 12:21:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:30.186 "subsystems": [ 00:20:30.186 { 00:20:30.186 "subsystem": "keyring", 00:20:30.186 "config": [] 00:20:30.186 }, 00:20:30.186 { 00:20:30.186 "subsystem": "iobuf", 00:20:30.186 "config": [ 00:20:30.186 { 00:20:30.186 "method": "iobuf_set_options", 00:20:30.186 "params": { 00:20:30.186 "small_pool_count": 8192, 00:20:30.186 "large_pool_count": 1024, 00:20:30.186 "small_bufsize": 8192, 00:20:30.186 "large_bufsize": 135168 00:20:30.186 } 00:20:30.186 } 00:20:30.186 ] 00:20:30.186 }, 00:20:30.186 { 00:20:30.186 "subsystem": "sock", 00:20:30.186 "config": [ 00:20:30.186 { 00:20:30.186 "method": "sock_impl_set_options", 00:20:30.186 "params": { 00:20:30.186 "impl_name": "posix", 00:20:30.186 "recv_buf_size": 2097152, 00:20:30.186 "send_buf_size": 2097152, 00:20:30.186 "enable_recv_pipe": true, 00:20:30.186 "enable_quickack": false, 00:20:30.186 "enable_placement_id": 0, 00:20:30.186 "enable_zerocopy_send_server": true, 00:20:30.186 "enable_zerocopy_send_client": false, 00:20:30.186 "zerocopy_threshold": 0, 00:20:30.186 "tls_version": 0, 00:20:30.186 "enable_ktls": false 00:20:30.186 } 00:20:30.186 }, 00:20:30.186 { 00:20:30.186 "method": "sock_impl_set_options", 00:20:30.186 "params": { 00:20:30.186 
"impl_name": "ssl", 00:20:30.186 "recv_buf_size": 4096, 00:20:30.186 "send_buf_size": 4096, 00:20:30.186 "enable_recv_pipe": true, 00:20:30.186 "enable_quickack": false, 00:20:30.186 "enable_placement_id": 0, 00:20:30.186 "enable_zerocopy_send_server": true, 00:20:30.186 "enable_zerocopy_send_client": false, 00:20:30.186 "zerocopy_threshold": 0, 00:20:30.186 "tls_version": 0, 00:20:30.186 "enable_ktls": false 00:20:30.186 } 00:20:30.186 } 00:20:30.186 ] 00:20:30.186 }, 00:20:30.186 { 00:20:30.186 "subsystem": "vmd", 00:20:30.186 "config": [] 00:20:30.186 }, 00:20:30.186 { 00:20:30.186 "subsystem": "accel", 00:20:30.186 "config": [ 00:20:30.186 { 00:20:30.186 "method": "accel_set_options", 00:20:30.186 "params": { 00:20:30.186 "small_cache_size": 128, 00:20:30.186 "large_cache_size": 16, 00:20:30.186 "task_count": 2048, 00:20:30.186 "sequence_count": 2048, 00:20:30.186 "buf_count": 2048 00:20:30.186 } 00:20:30.186 } 00:20:30.186 ] 00:20:30.186 }, 00:20:30.186 { 00:20:30.186 "subsystem": "bdev", 00:20:30.186 "config": [ 00:20:30.186 { 00:20:30.186 "method": "bdev_set_options", 00:20:30.186 "params": { 00:20:30.186 "bdev_io_pool_size": 65535, 00:20:30.186 "bdev_io_cache_size": 256, 00:20:30.186 "bdev_auto_examine": true, 00:20:30.186 "iobuf_small_cache_size": 128, 00:20:30.186 "iobuf_large_cache_size": 16 00:20:30.186 } 00:20:30.186 }, 00:20:30.186 { 00:20:30.186 "method": "bdev_raid_set_options", 00:20:30.186 "params": { 00:20:30.186 "process_window_size_kb": 1024 00:20:30.186 } 00:20:30.186 }, 00:20:30.186 { 00:20:30.186 "method": "bdev_iscsi_set_options", 00:20:30.186 "params": { 00:20:30.186 "timeout_sec": 30 00:20:30.186 } 00:20:30.186 }, 00:20:30.186 { 00:20:30.186 "method": "bdev_nvme_set_options", 00:20:30.186 "params": { 00:20:30.186 "action_on_timeout": "none", 00:20:30.186 "timeout_us": 0, 00:20:30.186 "timeout_admin_us": 0, 00:20:30.186 "keep_alive_timeout_ms": 10000, 00:20:30.186 "arbitration_burst": 0, 00:20:30.186 "low_priority_weight": 0, 00:20:30.186 "medium_priority_weight": 0, 00:20:30.186 "high_priority_weight": 0, 00:20:30.186 "nvme_adminq_poll_period_us": 10000, 00:20:30.186 "nvme_ioq_poll_period_us": 0, 00:20:30.186 "io_queue_requests": 0, 00:20:30.186 "delay_cmd_submit": true, 00:20:30.186 "transport_retry_count": 4, 00:20:30.186 "bdev_retry_count": 3, 00:20:30.186 "transport_ack_timeout": 0, 00:20:30.186 "ctrlr_loss_timeout_sec": 0, 00:20:30.186 "reconnect_delay_sec": 0, 00:20:30.186 "fast_io_fail_timeout_sec": 0, 00:20:30.186 "disable_auto_failback": false, 00:20:30.186 "generate_uuids": false, 00:20:30.186 "transport_tos": 0, 00:20:30.187 "nvme_error_stat": false, 00:20:30.187 "rdma_srq_size": 0, 00:20:30.187 "io_path_stat": false, 00:20:30.187 "allow_accel_sequence": false, 00:20:30.187 "rdma_max_cq_size": 0, 00:20:30.187 "rdma_cm_event_timeout_ms": 0, 00:20:30.187 "dhchap_digests": [ 00:20:30.187 "sha256", 00:20:30.187 "sha384", 00:20:30.187 "sha512" 00:20:30.187 ], 00:20:30.187 "dhchap_dhgroups": [ 00:20:30.187 "null", 00:20:30.187 "ffdhe2048", 00:20:30.187 "ffdhe3072", 00:20:30.187 "ffdhe4096", 00:20:30.187 "ffdhe6144", 00:20:30.187 "ffdhe8192" 00:20:30.187 ] 00:20:30.187 } 00:20:30.187 }, 00:20:30.187 { 00:20:30.187 "method": "bdev_nvme_set_hotplug", 00:20:30.187 "params": { 00:20:30.187 "period_us": 100000, 00:20:30.187 "enable": false 00:20:30.187 } 00:20:30.187 }, 00:20:30.187 { 00:20:30.187 "method": "bdev_malloc_create", 00:20:30.187 "params": { 00:20:30.187 "name": "malloc0", 00:20:30.187 "num_blocks": 8192, 00:20:30.187 "block_size": 4096, 00:20:30.187 
"physical_block_size": 4096, 00:20:30.187 "uuid": "a5eb7ae0-1941-4cae-b888-1cce5aab5f4e", 00:20:30.187 "optimal_io_boundary": 0 00:20:30.187 } 00:20:30.187 }, 00:20:30.187 { 00:20:30.187 "method": "bdev_wait_for_examine" 00:20:30.187 } 00:20:30.187 ] 00:20:30.187 }, 00:20:30.187 { 00:20:30.187 "subsystem": "nbd", 00:20:30.187 "config": [] 00:20:30.187 }, 00:20:30.187 { 00:20:30.187 "subsystem": "scheduler", 00:20:30.187 "config": [ 00:20:30.187 { 00:20:30.187 "method": "framework_set_scheduler", 00:20:30.187 "params": { 00:20:30.187 "name": "static" 00:20:30.187 } 00:20:30.187 } 00:20:30.187 ] 00:20:30.187 }, 00:20:30.187 { 00:20:30.187 "subsystem": "nvmf", 00:20:30.187 "config": [ 00:20:30.187 { 00:20:30.187 "method": "nvmf_set_config", 00:20:30.187 "params": { 00:20:30.187 "discovery_filter": "match_any", 00:20:30.187 "admin_cmd_passthru": { 00:20:30.187 "identify_ctrlr": false 00:20:30.187 } 00:20:30.187 } 00:20:30.187 }, 00:20:30.187 { 00:20:30.187 "method": "nvmf_set_max_subsystems", 00:20:30.187 "params": { 00:20:30.187 "max_subsystems": 1024 00:20:30.187 } 00:20:30.187 }, 00:20:30.187 { 00:20:30.187 "method": "nvmf_set_crdt", 00:20:30.187 "params": { 00:20:30.187 "crdt1": 0, 00:20:30.187 "crdt2": 0, 00:20:30.187 "crdt3": 0 00:20:30.187 } 00:20:30.187 }, 00:20:30.187 { 00:20:30.187 "method": "nvmf_create_transport", 00:20:30.187 "params": { 00:20:30.187 "trtype": "TCP", 00:20:30.187 "max_queue_depth": 128, 00:20:30.187 "max_io_qpairs_per_ctrlr": 127, 00:20:30.187 "in_capsule_data_size": 4096, 00:20:30.187 "max_io_size": 131072, 00:20:30.187 "io_unit_size": 131072, 00:20:30.187 "max_aq_depth": 128, 00:20:30.187 "num_shared_buffers": 511, 00:20:30.187 "buf_cache_size": 4294967295, 00:20:30.187 "dif_insert_or_strip": false, 00:20:30.187 "zcopy": false, 00:20:30.187 "c2h_success": false, 00:20:30.187 "sock_priority": 0, 00:20:30.187 "abort_timeout_sec": 1, 00:20:30.187 "ack_timeout": 0, 00:20:30.187 "data_wr_pool_size": 0 00:20:30.187 } 00:20:30.187 }, 00:20:30.187 { 00:20:30.187 "method": "nvmf_create_subsystem", 00:20:30.187 "params": { 00:20:30.187 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.187 "allow_any_host": false, 00:20:30.187 "serial_number": "SPDK00000000000001", 00:20:30.187 "model_number": "SPDK bdev Controller", 00:20:30.187 "max_namespaces": 10, 00:20:30.187 "min_cntlid": 1, 00:20:30.187 "max_cntlid": 65519, 00:20:30.187 "ana_reporting": false 00:20:30.187 } 00:20:30.187 }, 00:20:30.187 { 00:20:30.187 "method": "nvmf_subsystem_add_host", 00:20:30.187 "params": { 00:20:30.187 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.187 "host": "nqn.2016-06.io.spdk:host1", 00:20:30.187 "psk": "/tmp/tmp.HAVg2QEXy6" 00:20:30.187 } 00:20:30.187 }, 00:20:30.187 { 00:20:30.187 "method": "nvmf_subsystem_add_ns", 00:20:30.187 "params": { 00:20:30.187 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.187 "namespace": { 00:20:30.187 "nsid": 1, 00:20:30.187 "bdev_name": "malloc0", 00:20:30.187 "nguid": "A5EB7AE019414CAEB8881CCE5AAB5F4E", 00:20:30.187 "uuid": "a5eb7ae0-1941-4cae-b888-1cce5aab5f4e", 00:20:30.187 "no_auto_visible": false 00:20:30.187 } 00:20:30.188 } 00:20:30.188 }, 00:20:30.188 { 00:20:30.188 "method": "nvmf_subsystem_add_listener", 00:20:30.188 "params": { 00:20:30.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.188 "listen_address": { 00:20:30.188 "trtype": "TCP", 00:20:30.188 "adrfam": "IPv4", 00:20:30.188 "traddr": "10.0.0.2", 00:20:30.188 "trsvcid": "4420" 00:20:30.188 }, 00:20:30.188 "secure_channel": true 00:20:30.188 } 00:20:30.188 } 00:20:30.188 ] 00:20:30.188 } 
00:20:30.188 ] 00:20:30.188 }' 00:20:30.188 12:21:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:30.447 12:21:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:30.447 "subsystems": [ 00:20:30.447 { 00:20:30.447 "subsystem": "keyring", 00:20:30.447 "config": [] 00:20:30.447 }, 00:20:30.447 { 00:20:30.447 "subsystem": "iobuf", 00:20:30.447 "config": [ 00:20:30.447 { 00:20:30.447 "method": "iobuf_set_options", 00:20:30.447 "params": { 00:20:30.447 "small_pool_count": 8192, 00:20:30.447 "large_pool_count": 1024, 00:20:30.447 "small_bufsize": 8192, 00:20:30.447 "large_bufsize": 135168 00:20:30.447 } 00:20:30.447 } 00:20:30.447 ] 00:20:30.447 }, 00:20:30.447 { 00:20:30.447 "subsystem": "sock", 00:20:30.447 "config": [ 00:20:30.447 { 00:20:30.447 "method": "sock_impl_set_options", 00:20:30.447 "params": { 00:20:30.447 "impl_name": "posix", 00:20:30.447 "recv_buf_size": 2097152, 00:20:30.447 "send_buf_size": 2097152, 00:20:30.447 "enable_recv_pipe": true, 00:20:30.447 "enable_quickack": false, 00:20:30.447 "enable_placement_id": 0, 00:20:30.447 "enable_zerocopy_send_server": true, 00:20:30.447 "enable_zerocopy_send_client": false, 00:20:30.447 "zerocopy_threshold": 0, 00:20:30.447 "tls_version": 0, 00:20:30.447 "enable_ktls": false 00:20:30.447 } 00:20:30.447 }, 00:20:30.447 { 00:20:30.447 "method": "sock_impl_set_options", 00:20:30.447 "params": { 00:20:30.447 "impl_name": "ssl", 00:20:30.447 "recv_buf_size": 4096, 00:20:30.447 "send_buf_size": 4096, 00:20:30.447 "enable_recv_pipe": true, 00:20:30.447 "enable_quickack": false, 00:20:30.447 "enable_placement_id": 0, 00:20:30.447 "enable_zerocopy_send_server": true, 00:20:30.447 "enable_zerocopy_send_client": false, 00:20:30.447 "zerocopy_threshold": 0, 00:20:30.447 "tls_version": 0, 00:20:30.447 "enable_ktls": false 00:20:30.447 } 00:20:30.447 } 00:20:30.447 ] 00:20:30.447 }, 00:20:30.447 { 00:20:30.447 "subsystem": "vmd", 00:20:30.447 "config": [] 00:20:30.447 }, 00:20:30.447 { 00:20:30.447 "subsystem": "accel", 00:20:30.447 "config": [ 00:20:30.447 { 00:20:30.447 "method": "accel_set_options", 00:20:30.447 "params": { 00:20:30.447 "small_cache_size": 128, 00:20:30.447 "large_cache_size": 16, 00:20:30.447 "task_count": 2048, 00:20:30.447 "sequence_count": 2048, 00:20:30.447 "buf_count": 2048 00:20:30.448 } 00:20:30.448 } 00:20:30.448 ] 00:20:30.448 }, 00:20:30.448 { 00:20:30.448 "subsystem": "bdev", 00:20:30.448 "config": [ 00:20:30.448 { 00:20:30.448 "method": "bdev_set_options", 00:20:30.448 "params": { 00:20:30.448 "bdev_io_pool_size": 65535, 00:20:30.448 "bdev_io_cache_size": 256, 00:20:30.448 "bdev_auto_examine": true, 00:20:30.448 "iobuf_small_cache_size": 128, 00:20:30.448 "iobuf_large_cache_size": 16 00:20:30.448 } 00:20:30.448 }, 00:20:30.448 { 00:20:30.448 "method": "bdev_raid_set_options", 00:20:30.448 "params": { 00:20:30.448 "process_window_size_kb": 1024 00:20:30.448 } 00:20:30.448 }, 00:20:30.448 { 00:20:30.448 "method": "bdev_iscsi_set_options", 00:20:30.448 "params": { 00:20:30.448 "timeout_sec": 30 00:20:30.448 } 00:20:30.448 }, 00:20:30.448 { 00:20:30.448 "method": "bdev_nvme_set_options", 00:20:30.448 "params": { 00:20:30.448 "action_on_timeout": "none", 00:20:30.448 "timeout_us": 0, 00:20:30.448 "timeout_admin_us": 0, 00:20:30.448 "keep_alive_timeout_ms": 10000, 00:20:30.448 "arbitration_burst": 0, 00:20:30.448 "low_priority_weight": 0, 00:20:30.448 "medium_priority_weight": 0, 00:20:30.448 
"high_priority_weight": 0, 00:20:30.448 "nvme_adminq_poll_period_us": 10000, 00:20:30.448 "nvme_ioq_poll_period_us": 0, 00:20:30.448 "io_queue_requests": 512, 00:20:30.448 "delay_cmd_submit": true, 00:20:30.448 "transport_retry_count": 4, 00:20:30.448 "bdev_retry_count": 3, 00:20:30.448 "transport_ack_timeout": 0, 00:20:30.448 "ctrlr_loss_timeout_sec": 0, 00:20:30.448 "reconnect_delay_sec": 0, 00:20:30.448 "fast_io_fail_timeout_sec": 0, 00:20:30.448 "disable_auto_failback": false, 00:20:30.448 "generate_uuids": false, 00:20:30.448 "transport_tos": 0, 00:20:30.448 "nvme_error_stat": false, 00:20:30.448 "rdma_srq_size": 0, 00:20:30.448 "io_path_stat": false, 00:20:30.448 "allow_accel_sequence": false, 00:20:30.448 "rdma_max_cq_size": 0, 00:20:30.448 "rdma_cm_event_timeout_ms": 0, 00:20:30.448 "dhchap_digests": [ 00:20:30.448 "sha256", 00:20:30.448 "sha384", 00:20:30.448 "sha512" 00:20:30.448 ], 00:20:30.448 "dhchap_dhgroups": [ 00:20:30.448 "null", 00:20:30.448 "ffdhe2048", 00:20:30.448 "ffdhe3072", 00:20:30.448 "ffdhe4096", 00:20:30.448 "ffdhe6144", 00:20:30.448 "ffdhe8192" 00:20:30.448 ] 00:20:30.448 } 00:20:30.448 }, 00:20:30.448 { 00:20:30.448 "method": "bdev_nvme_attach_controller", 00:20:30.448 "params": { 00:20:30.448 "name": "TLSTEST", 00:20:30.448 "trtype": "TCP", 00:20:30.448 "adrfam": "IPv4", 00:20:30.448 "traddr": "10.0.0.2", 00:20:30.448 "trsvcid": "4420", 00:20:30.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.448 "prchk_reftag": false, 00:20:30.448 "prchk_guard": false, 00:20:30.448 "ctrlr_loss_timeout_sec": 0, 00:20:30.448 "reconnect_delay_sec": 0, 00:20:30.448 "fast_io_fail_timeout_sec": 0, 00:20:30.448 "psk": "/tmp/tmp.HAVg2QEXy6", 00:20:30.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.448 "hdgst": false, 00:20:30.448 "ddgst": false 00:20:30.448 } 00:20:30.448 }, 00:20:30.448 { 00:20:30.448 "method": "bdev_nvme_set_hotplug", 00:20:30.448 "params": { 00:20:30.448 "period_us": 100000, 00:20:30.448 "enable": false 00:20:30.448 } 00:20:30.448 }, 00:20:30.448 { 00:20:30.448 "method": "bdev_wait_for_examine" 00:20:30.448 } 00:20:30.448 ] 00:20:30.448 }, 00:20:30.448 { 00:20:30.448 "subsystem": "nbd", 00:20:30.448 "config": [] 00:20:30.448 } 00:20:30.448 ] 00:20:30.448 }' 00:20:30.448 12:21:58 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2166334 00:20:30.448 12:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2166334 ']' 00:20:30.448 12:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2166334 00:20:30.448 12:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:30.448 12:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:30.448 12:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2166334 00:20:30.448 12:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:30.448 12:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:30.448 12:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2166334' 00:20:30.448 killing process with pid 2166334 00:20:30.448 12:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2166334 00:20:30.448 Received shutdown signal, test time was about 10.000000 seconds 00:20:30.448 00:20:30.448 Latency(us) 00:20:30.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.448 
=================================================================================================================== 00:20:30.448 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:30.448 [2024-05-15 12:21:58.876296] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:30.448 12:21:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2166334 00:20:30.708 12:21:59 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2166041 00:20:30.708 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2166041 ']' 00:20:30.708 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2166041 00:20:30.708 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:30.708 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:30.708 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2166041 00:20:30.708 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:30.708 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:30.708 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2166041' 00:20:30.708 killing process with pid 2166041 00:20:30.708 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2166041 00:20:30.708 [2024-05-15 12:21:59.131513] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:30.708 [2024-05-15 12:21:59.131550] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:30.708 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2166041 00:20:30.968 12:21:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:30.968 12:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:30.968 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:30.968 12:21:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:30.968 "subsystems": [ 00:20:30.968 { 00:20:30.968 "subsystem": "keyring", 00:20:30.968 "config": [] 00:20:30.968 }, 00:20:30.968 { 00:20:30.968 "subsystem": "iobuf", 00:20:30.968 "config": [ 00:20:30.968 { 00:20:30.968 "method": "iobuf_set_options", 00:20:30.968 "params": { 00:20:30.968 "small_pool_count": 8192, 00:20:30.968 "large_pool_count": 1024, 00:20:30.968 "small_bufsize": 8192, 00:20:30.968 "large_bufsize": 135168 00:20:30.968 } 00:20:30.968 } 00:20:30.968 ] 00:20:30.968 }, 00:20:30.968 { 00:20:30.968 "subsystem": "sock", 00:20:30.968 "config": [ 00:20:30.968 { 00:20:30.968 "method": "sock_impl_set_options", 00:20:30.968 "params": { 00:20:30.968 "impl_name": "posix", 00:20:30.968 "recv_buf_size": 2097152, 00:20:30.968 "send_buf_size": 2097152, 00:20:30.968 "enable_recv_pipe": true, 00:20:30.968 "enable_quickack": false, 00:20:30.968 "enable_placement_id": 0, 00:20:30.968 "enable_zerocopy_send_server": true, 00:20:30.968 "enable_zerocopy_send_client": false, 00:20:30.968 "zerocopy_threshold": 0, 00:20:30.968 "tls_version": 0, 00:20:30.968 "enable_ktls": false 00:20:30.968 } 00:20:30.968 }, 00:20:30.968 { 00:20:30.968 "method": "sock_impl_set_options", 00:20:30.968 
"params": { 00:20:30.968 "impl_name": "ssl", 00:20:30.968 "recv_buf_size": 4096, 00:20:30.968 "send_buf_size": 4096, 00:20:30.968 "enable_recv_pipe": true, 00:20:30.968 "enable_quickack": false, 00:20:30.968 "enable_placement_id": 0, 00:20:30.968 "enable_zerocopy_send_server": true, 00:20:30.968 "enable_zerocopy_send_client": false, 00:20:30.968 "zerocopy_threshold": 0, 00:20:30.968 "tls_version": 0, 00:20:30.969 "enable_ktls": false 00:20:30.969 } 00:20:30.969 } 00:20:30.969 ] 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "subsystem": "vmd", 00:20:30.969 "config": [] 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "subsystem": "accel", 00:20:30.969 "config": [ 00:20:30.969 { 00:20:30.969 "method": "accel_set_options", 00:20:30.969 "params": { 00:20:30.969 "small_cache_size": 128, 00:20:30.969 "large_cache_size": 16, 00:20:30.969 "task_count": 2048, 00:20:30.969 "sequence_count": 2048, 00:20:30.969 "buf_count": 2048 00:20:30.969 } 00:20:30.969 } 00:20:30.969 ] 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "subsystem": "bdev", 00:20:30.969 "config": [ 00:20:30.969 { 00:20:30.969 "method": "bdev_set_options", 00:20:30.969 "params": { 00:20:30.969 "bdev_io_pool_size": 65535, 00:20:30.969 "bdev_io_cache_size": 256, 00:20:30.969 "bdev_auto_examine": true, 00:20:30.969 "iobuf_small_cache_size": 128, 00:20:30.969 "iobuf_large_cache_size": 16 00:20:30.969 } 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "method": "bdev_raid_set_options", 00:20:30.969 "params": { 00:20:30.969 "process_window_size_kb": 1024 00:20:30.969 } 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "method": "bdev_iscsi_set_options", 00:20:30.969 "params": { 00:20:30.969 "timeout_sec": 30 00:20:30.969 } 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "method": "bdev_nvme_set_options", 00:20:30.969 "params": { 00:20:30.969 "action_on_timeout": "none", 00:20:30.969 "timeout_us": 0, 00:20:30.969 "timeout_admin_us": 0, 00:20:30.969 "keep_alive_timeout_ms": 10000, 00:20:30.969 "arbitration_burst": 0, 00:20:30.969 "low_priority_weight": 0, 00:20:30.969 "medium_priority_weight": 0, 00:20:30.969 "high_priority_weight": 0, 00:20:30.969 "nvme_adminq_poll_period_us": 10000, 00:20:30.969 "nvme_ioq_poll_period_us": 0, 00:20:30.969 "io_queue_requests": 0, 00:20:30.969 "delay_cmd_submit": true, 00:20:30.969 "transport_retry_count": 4, 00:20:30.969 "bdev_retry_count": 3, 00:20:30.969 "transport_ack_timeout": 0, 00:20:30.969 "ctrlr_loss_timeout_sec": 0, 00:20:30.969 "reconnect_delay_sec": 0, 00:20:30.969 "fast_io_fail_timeout_sec": 0, 00:20:30.969 "disable_auto_failback": false, 00:20:30.969 "generate_uuids": false, 00:20:30.969 "transport_tos": 0, 00:20:30.969 "nvme_error_stat": false, 00:20:30.969 "rdma_srq_size": 0, 00:20:30.969 "io_path_stat": false, 00:20:30.969 "allow_accel_sequence": false, 00:20:30.969 "rdma_max_cq_size": 0, 00:20:30.969 "rdma_cm_event_timeout_ms": 0, 00:20:30.969 "dhchap_digests": [ 00:20:30.969 "sha256", 00:20:30.969 "sha384", 00:20:30.969 "sha512" 00:20:30.969 ], 00:20:30.969 "dhchap_dhgroups": [ 00:20:30.969 "null", 00:20:30.969 "ffdhe2048", 00:20:30.969 "ffdhe3072", 00:20:30.969 "ffdhe4096", 00:20:30.969 "ffdhe6144", 00:20:30.969 "ffdhe8192" 00:20:30.969 ] 00:20:30.969 } 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "method": "bdev_nvme_set_hotplug", 00:20:30.969 "params": { 00:20:30.969 "period_us": 100000, 00:20:30.969 "enable": false 00:20:30.969 } 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "method": "bdev_malloc_create", 00:20:30.969 "params": { 00:20:30.969 "name": "malloc0", 00:20:30.969 "num_blocks": 8192, 00:20:30.969 
"block_size": 4096, 00:20:30.969 "physical_block_size": 4096, 00:20:30.969 "uuid": "a5eb7ae0-1941-4cae-b888-1cce5aab5f4e", 00:20:30.969 "optimal_io_boundary": 0 00:20:30.969 } 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "method": "bdev_wait_for_examine" 00:20:30.969 } 00:20:30.969 ] 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "subsystem": "nbd", 00:20:30.969 "config": [] 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "subsystem": "scheduler", 00:20:30.969 "config": [ 00:20:30.969 { 00:20:30.969 "method": "framework_set_scheduler", 00:20:30.969 "params": { 00:20:30.969 "name": "static" 00:20:30.969 } 00:20:30.969 } 00:20:30.969 ] 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "subsystem": "nvmf", 00:20:30.969 "config": [ 00:20:30.969 { 00:20:30.969 "method": "nvmf_set_config", 00:20:30.969 "params": { 00:20:30.969 "discovery_filter": "match_any", 00:20:30.969 "admin_cmd_passthru": { 00:20:30.969 "identify_ctrlr": false 00:20:30.969 } 00:20:30.969 } 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "method": "nvmf_set_max_subsystems", 00:20:30.969 "params": { 00:20:30.969 "max_subsystems": 1024 00:20:30.969 } 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "method": "nvmf_set_crdt", 00:20:30.969 "params": { 00:20:30.969 "crdt1": 0, 00:20:30.969 "crdt2": 0, 00:20:30.969 "crdt3": 0 00:20:30.969 } 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "method": "nvmf_create_transport", 00:20:30.969 "params": { 00:20:30.969 "trtype": "TCP", 00:20:30.969 "max_queue_depth": 128, 00:20:30.969 "max_io_qpairs_per_ctrlr": 127, 00:20:30.969 "in_capsule_data_size": 4096, 00:20:30.969 "max_io_size": 131072, 00:20:30.969 "io_unit_size": 131072, 00:20:30.969 "max_aq_depth": 128, 00:20:30.969 "num_shared_buffers": 511, 00:20:30.969 "buf_cache_size": 4294967295, 00:20:30.969 "dif_insert_or_strip": false, 00:20:30.969 "zcopy": false, 00:20:30.969 "c2h_success": false, 00:20:30.969 "sock_priority": 0, 00:20:30.969 "abort_timeout_sec": 1, 00:20:30.969 "ack_timeout": 0, 00:20:30.969 "data_wr_pool_size": 0 00:20:30.969 } 00:20:30.969 }, 00:20:30.969 { 00:20:30.969 "method": "nvmf_create_subsystem", 00:20:30.969 "params": { 00:20:30.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.969 "allow_any_host": false, 00:20:30.970 "serial_number": "SPDK00000000000001", 00:20:30.970 "model_number": "SPDK bdev Controller", 00:20:30.970 "max_namespaces": 10, 00:20:30.970 "min_cntlid": 1, 00:20:30.970 "max_cntlid": 65519, 00:20:30.970 "ana_reporting": false 00:20:30.970 } 00:20:30.970 }, 00:20:30.970 { 00:20:30.970 "method": "nvmf_subsystem_add_host", 00:20:30.970 "params": { 00:20:30.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.970 "host": "nqn.2016-06.io.spdk:host1", 00:20:30.970 "psk": "/tmp/tmp.HAVg2QEXy6" 00:20:30.970 } 00:20:30.970 }, 00:20:30.970 { 00:20:30.970 "method": "nvmf_subsystem_add_ns", 00:20:30.970 "params": { 00:20:30.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.970 "namespace": { 00:20:30.970 "nsid": 1, 00:20:30.970 "bdev_name": "malloc0", 00:20:30.970 "nguid": "A5EB7AE019414CAEB8881CCE5AAB5F4E", 00:20:30.970 "uuid": "a5eb7ae0-1941-4cae-b888-1cce5aab5f4e", 00:20:30.970 "no_auto_visible": false 00:20:30.970 } 00:20:30.970 } 00:20:30.970 }, 00:20:30.970 { 00:20:30.970 "method": "nvmf_subsystem_add_listener", 00:20:30.970 "params": { 00:20:30.970 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.970 "listen_address": { 00:20:30.970 "trtype": "TCP", 00:20:30.970 "adrfam": "IPv4", 00:20:30.970 "traddr": "10.0.0.2", 00:20:30.970 "trsvcid": "4420" 00:20:30.970 }, 00:20:30.970 "secure_channel": true 00:20:30.970 } 00:20:30.970 } 
00:20:30.970 ] 00:20:30.970 } 00:20:30.970 ] 00:20:30.970 }' 00:20:30.970 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.970 12:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2166734 00:20:30.970 12:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:30.970 12:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2166734 00:20:30.970 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2166734 ']' 00:20:30.970 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.970 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:30.970 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.970 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:30.970 12:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.970 [2024-05-15 12:21:59.397402] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:20:30.970 [2024-05-15 12:21:59.397451] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.970 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.970 [2024-05-15 12:21:59.471594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.229 [2024-05-15 12:21:59.545361] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.229 [2024-05-15 12:21:59.545398] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.229 [2024-05-15 12:21:59.545412] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.229 [2024-05-15 12:21:59.545421] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.229 [2024-05-15 12:21:59.545428] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
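At this point the log shows the target being restarted with its configuration supplied on /dev/fd/62: the JSON produced by an earlier save_config call is replayed into a fresh nvmf_tgt instead of being re-issued as individual RPCs. A minimal sketch of that capture-and-replay pattern, assuming a running target on the default RPC socket; the test harness builds the JSON inline and feeds it on file descriptor 62, whereas this illustration writes it to a temporary file for simplicity, so treat it as a sketch rather than the test script's literal code:

#!/usr/bin/env bash
# Sketch: capture a running target's configuration and replay it at startup.
# SPDK_DIR is the checkout path visible in the log; /tmp/tgt_config.json is a
# placeholder introduced here. The '-c /dev/fd/62' form in the log is the
# harness's equivalent of passing this file.
set -euo pipefail
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 1) Dump the current configuration of the running target as JSON.
"$SPDK_DIR/scripts/rpc.py" save_config > /tmp/tgt_config.json

# 2) Start a new target that applies that JSON before accepting connections.
"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x2 -c /tmp/tgt_config.json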
00:20:31.229 [2024-05-15 12:21:59.545484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.229 [2024-05-15 12:21:59.740421] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.229 [2024-05-15 12:21:59.756413] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:31.488 [2024-05-15 12:21:59.772428] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:31.488 [2024-05-15 12:21:59.772471] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:31.488 [2024-05-15 12:21:59.783563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2166905 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2166905 /var/tmp/bdevperf.sock 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2166905 ']' 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:31.747 12:22:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:31.747 "subsystems": [ 00:20:31.747 { 00:20:31.747 "subsystem": "keyring", 00:20:31.747 "config": [] 00:20:31.747 }, 00:20:31.747 { 00:20:31.747 "subsystem": "iobuf", 00:20:31.747 "config": [ 00:20:31.747 { 00:20:31.747 "method": "iobuf_set_options", 00:20:31.747 "params": { 00:20:31.747 "small_pool_count": 8192, 00:20:31.747 "large_pool_count": 1024, 00:20:31.747 "small_bufsize": 8192, 00:20:31.747 "large_bufsize": 135168 00:20:31.747 } 00:20:31.747 } 00:20:31.747 ] 00:20:31.747 }, 00:20:31.747 { 00:20:31.747 "subsystem": "sock", 00:20:31.747 "config": [ 00:20:31.747 { 00:20:31.747 "method": "sock_impl_set_options", 00:20:31.747 "params": { 00:20:31.747 "impl_name": "posix", 00:20:31.747 "recv_buf_size": 2097152, 00:20:31.747 "send_buf_size": 2097152, 00:20:31.747 "enable_recv_pipe": true, 00:20:31.747 "enable_quickack": false, 00:20:31.747 "enable_placement_id": 0, 00:20:31.747 "enable_zerocopy_send_server": true, 00:20:31.747 "enable_zerocopy_send_client": false, 00:20:31.747 "zerocopy_threshold": 0, 00:20:31.747 "tls_version": 0, 00:20:31.747 "enable_ktls": false 00:20:31.747 } 00:20:31.747 }, 00:20:31.747 { 00:20:31.747 "method": "sock_impl_set_options", 00:20:31.747 "params": { 00:20:31.747 "impl_name": "ssl", 00:20:31.747 "recv_buf_size": 4096, 00:20:31.747 "send_buf_size": 4096, 00:20:31.747 "enable_recv_pipe": true, 00:20:31.747 "enable_quickack": false, 00:20:31.747 "enable_placement_id": 0, 00:20:31.747 "enable_zerocopy_send_server": true, 00:20:31.747 "enable_zerocopy_send_client": false, 00:20:31.747 "zerocopy_threshold": 0, 00:20:31.747 "tls_version": 0, 00:20:31.747 "enable_ktls": false 00:20:31.747 } 00:20:31.747 } 00:20:31.747 ] 00:20:31.747 }, 00:20:31.747 { 00:20:31.747 "subsystem": "vmd", 00:20:31.747 "config": [] 00:20:31.747 }, 00:20:31.747 { 00:20:31.747 "subsystem": "accel", 00:20:31.747 "config": [ 00:20:31.747 { 00:20:31.747 "method": "accel_set_options", 00:20:31.747 "params": { 00:20:31.747 "small_cache_size": 128, 00:20:31.747 "large_cache_size": 16, 00:20:31.747 "task_count": 2048, 00:20:31.747 "sequence_count": 2048, 00:20:31.747 "buf_count": 2048 00:20:31.747 } 00:20:31.747 } 00:20:31.747 ] 00:20:31.747 }, 00:20:31.747 { 00:20:31.747 "subsystem": "bdev", 00:20:31.747 "config": [ 00:20:31.747 { 00:20:31.747 "method": "bdev_set_options", 00:20:31.747 "params": { 00:20:31.747 "bdev_io_pool_size": 65535, 00:20:31.747 "bdev_io_cache_size": 256, 00:20:31.747 "bdev_auto_examine": true, 00:20:31.747 "iobuf_small_cache_size": 128, 00:20:31.747 "iobuf_large_cache_size": 16 00:20:31.747 } 00:20:31.747 }, 00:20:31.747 { 00:20:31.747 "method": "bdev_raid_set_options", 00:20:31.747 "params": { 00:20:31.747 "process_window_size_kb": 1024 00:20:31.747 } 00:20:31.747 }, 00:20:31.747 { 00:20:31.747 "method": "bdev_iscsi_set_options", 00:20:31.747 "params": { 00:20:31.747 "timeout_sec": 30 00:20:31.747 } 00:20:31.747 }, 00:20:31.747 { 00:20:31.747 "method": "bdev_nvme_set_options", 00:20:31.747 "params": { 00:20:31.747 "action_on_timeout": "none", 00:20:31.747 "timeout_us": 0, 00:20:31.747 "timeout_admin_us": 0, 00:20:31.747 "keep_alive_timeout_ms": 10000, 00:20:31.747 "arbitration_burst": 0, 00:20:31.747 "low_priority_weight": 0, 00:20:31.747 "medium_priority_weight": 0, 00:20:31.747 "high_priority_weight": 0, 00:20:31.747 "nvme_adminq_poll_period_us": 10000, 00:20:31.747 "nvme_ioq_poll_period_us": 0, 
00:20:31.747 "io_queue_requests": 512, 00:20:31.748 "delay_cmd_submit": true, 00:20:31.748 "transport_retry_count": 4, 00:20:31.748 "bdev_retry_count": 3, 00:20:31.748 "transport_ack_timeout": 0, 00:20:31.748 "ctrlr_loss_timeout_sec": 0, 00:20:31.748 "reconnect_delay_sec": 0, 00:20:31.748 "fast_io_fail_timeout_sec": 0, 00:20:31.748 "disable_auto_failback": false, 00:20:31.748 "generate_uuids": false, 00:20:31.748 "transport_tos": 0, 00:20:31.748 "nvme_error_stat": false, 00:20:31.748 "rdma_srq_size": 0, 00:20:31.748 "io_path_stat": false, 00:20:31.748 "allow_accel_sequence": false, 00:20:31.748 "rdma_max_cq_size": 0, 00:20:31.748 "rdma_cm_event_timeout_ms": 0, 00:20:31.748 "dhchap_digests": [ 00:20:31.748 "sha256", 00:20:31.748 "sha384", 00:20:31.748 "sha512" 00:20:31.748 ], 00:20:31.748 "dhchap_dhgroups": [ 00:20:31.748 "null", 00:20:31.748 "ffdhe2048", 00:20:31.748 "ffdhe3072", 00:20:31.748 "ffdhe4096", 00:20:31.748 "ffdhe6144", 00:20:31.748 "ffdhe8192" 00:20:31.748 ] 00:20:31.748 } 00:20:31.748 }, 00:20:31.748 { 00:20:31.748 "method": "bdev_nvme_attach_controller", 00:20:31.748 "params": { 00:20:31.748 "name": "TLSTEST", 00:20:31.748 "trtype": "TCP", 00:20:31.748 "adrfam": "IPv4", 00:20:31.748 "traddr": "10.0.0.2", 00:20:31.748 "trsvcid": "4420", 00:20:31.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.748 "prchk_reftag": false, 00:20:31.748 "prchk_guard": false, 00:20:31.748 "ctrlr_loss_timeout_sec": 0, 00:20:31.748 "reconnect_delay_sec": 0, 00:20:31.748 "fast_io_fail_timeout_sec": 0, 00:20:31.748 "psk": "/tmp/tmp.HAVg2QEXy6", 00:20:31.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.748 "hdgst": false, 00:20:31.748 "ddgst": false 00:20:31.748 } 00:20:31.748 }, 00:20:31.748 { 00:20:31.748 "method": "bdev_nvme_set_hotplug", 00:20:31.748 "params": { 00:20:31.748 "period_us": 100000, 00:20:31.748 "enable": false 00:20:31.748 } 00:20:31.748 }, 00:20:31.748 { 00:20:31.748 "method": "bdev_wait_for_examine" 00:20:31.748 } 00:20:31.748 ] 00:20:31.748 }, 00:20:31.748 { 00:20:31.748 "subsystem": "nbd", 00:20:31.748 "config": [] 00:20:31.748 } 00:20:31.748 ] 00:20:31.748 }' 00:20:31.748 12:22:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.006 [2024-05-15 12:22:00.299320] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:20:32.006 [2024-05-15 12:22:00.299372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166905 ] 00:20:32.006 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.006 [2024-05-15 12:22:00.365203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.006 [2024-05-15 12:22:00.435776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.265 [2024-05-15 12:22:00.571224] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:32.265 [2024-05-15 12:22:00.571320] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:32.832 12:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:32.832 12:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:32.832 12:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:32.832 Running I/O for 10 seconds... 00:20:42.805 00:20:42.805 Latency(us) 00:20:42.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.805 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:42.805 Verification LBA range: start 0x0 length 0x2000 00:20:42.805 TLSTESTn1 : 10.06 1883.54 7.36 0.00 0.00 67772.96 6868.17 110729.63 00:20:42.805 =================================================================================================================== 00:20:42.805 Total : 1883.54 7.36 0.00 0.00 67772.96 6868.17 110729.63 00:20:42.805 0 00:20:42.805 12:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:42.805 12:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2166905 00:20:42.805 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2166905 ']' 00:20:42.805 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2166905 00:20:42.805 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:42.805 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:42.805 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2166905 00:20:43.063 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:20:43.063 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:20:43.063 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2166905' 00:20:43.063 killing process with pid 2166905 00:20:43.063 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2166905 00:20:43.063 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.063 00:20:43.063 Latency(us) 00:20:43.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.063 =================================================================================================================== 00:20:43.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.063 [2024-05-15 12:22:11.346492] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:20:43.063 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2166905 00:20:43.063 12:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2166734 00:20:43.063 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2166734 ']' 00:20:43.063 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2166734 00:20:43.063 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:43.063 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:43.063 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2166734 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2166734' 00:20:43.323 killing process with pid 2166734 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2166734 00:20:43.323 [2024-05-15 12:22:11.595243] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:43.323 [2024-05-15 12:22:11.595284] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2166734 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2168812 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2168812 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2168812 ']' 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:43.323 12:22:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.582 [2024-05-15 12:22:11.862461] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:20:43.582 [2024-05-15 12:22:11.862513] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.582 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.582 [2024-05-15 12:22:11.935069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.582 [2024-05-15 12:22:12.002555] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.582 [2024-05-15 12:22:12.002596] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.582 [2024-05-15 12:22:12.002605] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.582 [2024-05-15 12:22:12.002614] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.582 [2024-05-15 12:22:12.002636] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.582 [2024-05-15 12:22:12.002657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.150 12:22:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:44.150 12:22:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:44.150 12:22:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:44.150 12:22:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:44.150 12:22:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.408 12:22:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.408 12:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.HAVg2QEXy6 00:20:44.408 12:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.HAVg2QEXy6 00:20:44.408 12:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:44.408 [2024-05-15 12:22:12.846174] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.408 12:22:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:44.667 12:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:44.667 [2024-05-15 12:22:13.174985] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:44.667 [2024-05-15 12:22:13.175028] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:44.667 [2024-05-15 12:22:13.175223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.667 12:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:44.926 malloc0 00:20:44.926 12:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:20:45.184 12:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HAVg2QEXy6 00:20:45.184 [2024-05-15 12:22:13.688657] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:45.184 12:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2169272 00:20:45.184 12:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:45.184 12:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:45.184 12:22:13 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2169272 /var/tmp/bdevperf.sock 00:20:45.184 12:22:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2169272 ']' 00:20:45.184 12:22:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.184 12:22:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:45.184 12:22:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.184 12:22:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:45.184 12:22:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.443 [2024-05-15 12:22:13.755258] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:20:45.443 [2024-05-15 12:22:13.755313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169272 ] 00:20:45.443 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.443 [2024-05-15 12:22:13.825817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.443 [2024-05-15 12:22:13.900571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.380 12:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:46.380 12:22:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:46.380 12:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HAVg2QEXy6 00:20:46.380 12:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:46.380 [2024-05-15 12:22:14.871860] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.639 nvme0n1 00:20:46.639 12:22:14 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:46.639 Running I/O for 1 seconds... 
00:20:48.016 00:20:48.016 Latency(us) 00:20:48.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.016 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:48.016 Verification LBA range: start 0x0 length 0x2000 00:20:48.016 nvme0n1 : 1.08 1626.44 6.35 0.00 0.00 76539.71 7287.60 103599.31 00:20:48.016 =================================================================================================================== 00:20:48.016 Total : 1626.44 6.35 0.00 0.00 76539.71 7287.60 103599.31 00:20:48.016 0 00:20:48.016 12:22:16 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2169272 00:20:48.016 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2169272 ']' 00:20:48.016 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2169272 00:20:48.016 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:48.016 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:48.016 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2169272 00:20:48.016 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:48.016 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:48.016 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2169272' 00:20:48.016 killing process with pid 2169272 00:20:48.016 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2169272 00:20:48.016 Received shutdown signal, test time was about 1.000000 seconds 00:20:48.016 00:20:48.016 Latency(us) 00:20:48.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.017 =================================================================================================================== 00:20:48.017 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:48.017 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2169272 00:20:48.017 12:22:16 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2168812 00:20:48.017 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2168812 ']' 00:20:48.017 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2168812 00:20:48.017 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:48.017 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:48.017 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2168812 00:20:48.017 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:20:48.017 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:20:48.017 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2168812' 00:20:48.017 killing process with pid 2168812 00:20:48.017 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2168812 00:20:48.017 [2024-05-15 12:22:16.445119] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:48.017 [2024-05-15 12:22:16.445164] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:48.017 12:22:16 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@971 -- # wait 2168812 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2169731 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2169731 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2169731 ']' 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:48.320 12:22:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.320 [2024-05-15 12:22:16.714419] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:20:48.320 [2024-05-15 12:22:16.714471] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.320 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.320 [2024-05-15 12:22:16.788628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.588 [2024-05-15 12:22:16.863063] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.588 [2024-05-15 12:22:16.863101] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.588 [2024-05-15 12:22:16.863110] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.588 [2024-05-15 12:22:16.863119] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.588 [2024-05-15 12:22:16.863126] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:48.588 [2024-05-15 12:22:16.863147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.155 [2024-05-15 12:22:17.558384] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.155 malloc0 00:20:49.155 [2024-05-15 12:22:17.586926] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:49.155 [2024-05-15 12:22:17.586996] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:49.155 [2024-05-15 12:22:17.587189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2169902 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2169902 /var/tmp/bdevperf.sock 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2169902 ']' 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:49.155 12:22:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.155 [2024-05-15 12:22:17.659855] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:20:49.155 [2024-05-15 12:22:17.659901] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169902 ] 00:20:49.413 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.413 [2024-05-15 12:22:17.727812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.413 [2024-05-15 12:22:17.804691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.979 12:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:49.979 12:22:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:49.979 12:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HAVg2QEXy6 00:20:50.237 12:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:50.494 [2024-05-15 12:22:18.776185] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:50.495 nvme0n1 00:20:50.495 12:22:18 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:50.495 Running I/O for 1 seconds... 00:20:51.870 00:20:51.870 Latency(us) 00:20:51.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.870 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:51.870 Verification LBA range: start 0x0 length 0x2000 00:20:51.870 nvme0n1 : 1.05 1589.07 6.21 0.00 0.00 78884.54 5164.24 103599.31 00:20:51.870 =================================================================================================================== 00:20:51.870 Total : 1589.07 6.21 0.00 0.00 78884.54 5164.24 103599.31 00:20:51.870 0 00:20:51.870 12:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:51.870 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.870 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.870 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.870 12:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:51.870 "subsystems": [ 00:20:51.870 { 00:20:51.870 "subsystem": "keyring", 00:20:51.870 "config": [ 00:20:51.870 { 00:20:51.870 "method": "keyring_file_add_key", 00:20:51.870 "params": { 00:20:51.870 "name": "key0", 00:20:51.870 "path": "/tmp/tmp.HAVg2QEXy6" 00:20:51.870 } 00:20:51.870 } 00:20:51.870 ] 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "subsystem": "iobuf", 00:20:51.870 "config": [ 00:20:51.870 { 00:20:51.870 "method": "iobuf_set_options", 00:20:51.870 "params": { 00:20:51.870 "small_pool_count": 8192, 00:20:51.870 "large_pool_count": 1024, 00:20:51.870 "small_bufsize": 8192, 00:20:51.870 "large_bufsize": 135168 00:20:51.870 } 00:20:51.870 } 00:20:51.870 ] 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "subsystem": "sock", 00:20:51.870 "config": [ 00:20:51.870 { 00:20:51.870 "method": "sock_impl_set_options", 00:20:51.870 "params": { 00:20:51.870 "impl_name": "posix", 00:20:51.870 "recv_buf_size": 2097152, 
00:20:51.870 "send_buf_size": 2097152, 00:20:51.870 "enable_recv_pipe": true, 00:20:51.870 "enable_quickack": false, 00:20:51.870 "enable_placement_id": 0, 00:20:51.870 "enable_zerocopy_send_server": true, 00:20:51.870 "enable_zerocopy_send_client": false, 00:20:51.870 "zerocopy_threshold": 0, 00:20:51.870 "tls_version": 0, 00:20:51.870 "enable_ktls": false 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "sock_impl_set_options", 00:20:51.870 "params": { 00:20:51.870 "impl_name": "ssl", 00:20:51.870 "recv_buf_size": 4096, 00:20:51.870 "send_buf_size": 4096, 00:20:51.870 "enable_recv_pipe": true, 00:20:51.870 "enable_quickack": false, 00:20:51.870 "enable_placement_id": 0, 00:20:51.870 "enable_zerocopy_send_server": true, 00:20:51.870 "enable_zerocopy_send_client": false, 00:20:51.870 "zerocopy_threshold": 0, 00:20:51.870 "tls_version": 0, 00:20:51.870 "enable_ktls": false 00:20:51.870 } 00:20:51.870 } 00:20:51.870 ] 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "subsystem": "vmd", 00:20:51.870 "config": [] 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "subsystem": "accel", 00:20:51.870 "config": [ 00:20:51.870 { 00:20:51.870 "method": "accel_set_options", 00:20:51.870 "params": { 00:20:51.870 "small_cache_size": 128, 00:20:51.870 "large_cache_size": 16, 00:20:51.870 "task_count": 2048, 00:20:51.870 "sequence_count": 2048, 00:20:51.870 "buf_count": 2048 00:20:51.870 } 00:20:51.870 } 00:20:51.870 ] 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "subsystem": "bdev", 00:20:51.870 "config": [ 00:20:51.870 { 00:20:51.870 "method": "bdev_set_options", 00:20:51.870 "params": { 00:20:51.870 "bdev_io_pool_size": 65535, 00:20:51.870 "bdev_io_cache_size": 256, 00:20:51.870 "bdev_auto_examine": true, 00:20:51.870 "iobuf_small_cache_size": 128, 00:20:51.870 "iobuf_large_cache_size": 16 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "bdev_raid_set_options", 00:20:51.870 "params": { 00:20:51.870 "process_window_size_kb": 1024 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "bdev_iscsi_set_options", 00:20:51.870 "params": { 00:20:51.870 "timeout_sec": 30 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "bdev_nvme_set_options", 00:20:51.870 "params": { 00:20:51.870 "action_on_timeout": "none", 00:20:51.870 "timeout_us": 0, 00:20:51.870 "timeout_admin_us": 0, 00:20:51.870 "keep_alive_timeout_ms": 10000, 00:20:51.870 "arbitration_burst": 0, 00:20:51.870 "low_priority_weight": 0, 00:20:51.870 "medium_priority_weight": 0, 00:20:51.870 "high_priority_weight": 0, 00:20:51.870 "nvme_adminq_poll_period_us": 10000, 00:20:51.870 "nvme_ioq_poll_period_us": 0, 00:20:51.870 "io_queue_requests": 0, 00:20:51.870 "delay_cmd_submit": true, 00:20:51.870 "transport_retry_count": 4, 00:20:51.870 "bdev_retry_count": 3, 00:20:51.870 "transport_ack_timeout": 0, 00:20:51.870 "ctrlr_loss_timeout_sec": 0, 00:20:51.870 "reconnect_delay_sec": 0, 00:20:51.870 "fast_io_fail_timeout_sec": 0, 00:20:51.870 "disable_auto_failback": false, 00:20:51.870 "generate_uuids": false, 00:20:51.870 "transport_tos": 0, 00:20:51.870 "nvme_error_stat": false, 00:20:51.870 "rdma_srq_size": 0, 00:20:51.870 "io_path_stat": false, 00:20:51.870 "allow_accel_sequence": false, 00:20:51.870 "rdma_max_cq_size": 0, 00:20:51.870 "rdma_cm_event_timeout_ms": 0, 00:20:51.870 "dhchap_digests": [ 00:20:51.870 "sha256", 00:20:51.870 "sha384", 00:20:51.870 "sha512" 00:20:51.870 ], 00:20:51.870 "dhchap_dhgroups": [ 00:20:51.870 "null", 00:20:51.870 "ffdhe2048", 00:20:51.870 "ffdhe3072", 
00:20:51.870 "ffdhe4096", 00:20:51.870 "ffdhe6144", 00:20:51.870 "ffdhe8192" 00:20:51.870 ] 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "bdev_nvme_set_hotplug", 00:20:51.870 "params": { 00:20:51.870 "period_us": 100000, 00:20:51.870 "enable": false 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "bdev_malloc_create", 00:20:51.870 "params": { 00:20:51.870 "name": "malloc0", 00:20:51.870 "num_blocks": 8192, 00:20:51.870 "block_size": 4096, 00:20:51.870 "physical_block_size": 4096, 00:20:51.870 "uuid": "ac368f3a-07f7-4f8e-a374-1ded0402b3d3", 00:20:51.870 "optimal_io_boundary": 0 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "bdev_wait_for_examine" 00:20:51.870 } 00:20:51.870 ] 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "subsystem": "nbd", 00:20:51.870 "config": [] 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "subsystem": "scheduler", 00:20:51.870 "config": [ 00:20:51.870 { 00:20:51.870 "method": "framework_set_scheduler", 00:20:51.870 "params": { 00:20:51.870 "name": "static" 00:20:51.870 } 00:20:51.870 } 00:20:51.870 ] 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "subsystem": "nvmf", 00:20:51.870 "config": [ 00:20:51.870 { 00:20:51.870 "method": "nvmf_set_config", 00:20:51.870 "params": { 00:20:51.870 "discovery_filter": "match_any", 00:20:51.870 "admin_cmd_passthru": { 00:20:51.870 "identify_ctrlr": false 00:20:51.870 } 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "nvmf_set_max_subsystems", 00:20:51.870 "params": { 00:20:51.870 "max_subsystems": 1024 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "nvmf_set_crdt", 00:20:51.870 "params": { 00:20:51.870 "crdt1": 0, 00:20:51.870 "crdt2": 0, 00:20:51.870 "crdt3": 0 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "nvmf_create_transport", 00:20:51.870 "params": { 00:20:51.870 "trtype": "TCP", 00:20:51.870 "max_queue_depth": 128, 00:20:51.870 "max_io_qpairs_per_ctrlr": 127, 00:20:51.870 "in_capsule_data_size": 4096, 00:20:51.870 "max_io_size": 131072, 00:20:51.870 "io_unit_size": 131072, 00:20:51.870 "max_aq_depth": 128, 00:20:51.870 "num_shared_buffers": 511, 00:20:51.870 "buf_cache_size": 4294967295, 00:20:51.870 "dif_insert_or_strip": false, 00:20:51.870 "zcopy": false, 00:20:51.870 "c2h_success": false, 00:20:51.870 "sock_priority": 0, 00:20:51.870 "abort_timeout_sec": 1, 00:20:51.870 "ack_timeout": 0, 00:20:51.870 "data_wr_pool_size": 0 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "nvmf_create_subsystem", 00:20:51.870 "params": { 00:20:51.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.870 "allow_any_host": false, 00:20:51.870 "serial_number": "00000000000000000000", 00:20:51.870 "model_number": "SPDK bdev Controller", 00:20:51.870 "max_namespaces": 32, 00:20:51.870 "min_cntlid": 1, 00:20:51.870 "max_cntlid": 65519, 00:20:51.870 "ana_reporting": false 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "nvmf_subsystem_add_host", 00:20:51.870 "params": { 00:20:51.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.870 "host": "nqn.2016-06.io.spdk:host1", 00:20:51.870 "psk": "key0" 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "nvmf_subsystem_add_ns", 00:20:51.870 "params": { 00:20:51.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.870 "namespace": { 00:20:51.870 "nsid": 1, 00:20:51.870 "bdev_name": "malloc0", 00:20:51.870 "nguid": "AC368F3A07F74F8EA3741DED0402B3D3", 00:20:51.870 "uuid": "ac368f3a-07f7-4f8e-a374-1ded0402b3d3", 00:20:51.870 
"no_auto_visible": false 00:20:51.870 } 00:20:51.870 } 00:20:51.870 }, 00:20:51.870 { 00:20:51.870 "method": "nvmf_subsystem_add_listener", 00:20:51.870 "params": { 00:20:51.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.870 "listen_address": { 00:20:51.870 "trtype": "TCP", 00:20:51.870 "adrfam": "IPv4", 00:20:51.870 "traddr": "10.0.0.2", 00:20:51.870 "trsvcid": "4420" 00:20:51.870 }, 00:20:51.870 "secure_channel": true 00:20:51.870 } 00:20:51.870 } 00:20:51.870 ] 00:20:51.870 } 00:20:51.870 ] 00:20:51.870 }' 00:20:51.870 12:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:52.128 12:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:52.128 "subsystems": [ 00:20:52.128 { 00:20:52.128 "subsystem": "keyring", 00:20:52.128 "config": [ 00:20:52.128 { 00:20:52.128 "method": "keyring_file_add_key", 00:20:52.128 "params": { 00:20:52.128 "name": "key0", 00:20:52.128 "path": "/tmp/tmp.HAVg2QEXy6" 00:20:52.128 } 00:20:52.128 } 00:20:52.128 ] 00:20:52.128 }, 00:20:52.128 { 00:20:52.128 "subsystem": "iobuf", 00:20:52.128 "config": [ 00:20:52.128 { 00:20:52.128 "method": "iobuf_set_options", 00:20:52.128 "params": { 00:20:52.128 "small_pool_count": 8192, 00:20:52.128 "large_pool_count": 1024, 00:20:52.128 "small_bufsize": 8192, 00:20:52.128 "large_bufsize": 135168 00:20:52.128 } 00:20:52.128 } 00:20:52.128 ] 00:20:52.128 }, 00:20:52.128 { 00:20:52.128 "subsystem": "sock", 00:20:52.128 "config": [ 00:20:52.128 { 00:20:52.128 "method": "sock_impl_set_options", 00:20:52.128 "params": { 00:20:52.128 "impl_name": "posix", 00:20:52.128 "recv_buf_size": 2097152, 00:20:52.128 "send_buf_size": 2097152, 00:20:52.128 "enable_recv_pipe": true, 00:20:52.128 "enable_quickack": false, 00:20:52.128 "enable_placement_id": 0, 00:20:52.128 "enable_zerocopy_send_server": true, 00:20:52.128 "enable_zerocopy_send_client": false, 00:20:52.128 "zerocopy_threshold": 0, 00:20:52.128 "tls_version": 0, 00:20:52.128 "enable_ktls": false 00:20:52.128 } 00:20:52.128 }, 00:20:52.128 { 00:20:52.128 "method": "sock_impl_set_options", 00:20:52.128 "params": { 00:20:52.128 "impl_name": "ssl", 00:20:52.128 "recv_buf_size": 4096, 00:20:52.128 "send_buf_size": 4096, 00:20:52.128 "enable_recv_pipe": true, 00:20:52.128 "enable_quickack": false, 00:20:52.128 "enable_placement_id": 0, 00:20:52.128 "enable_zerocopy_send_server": true, 00:20:52.128 "enable_zerocopy_send_client": false, 00:20:52.128 "zerocopy_threshold": 0, 00:20:52.128 "tls_version": 0, 00:20:52.128 "enable_ktls": false 00:20:52.128 } 00:20:52.128 } 00:20:52.128 ] 00:20:52.128 }, 00:20:52.128 { 00:20:52.128 "subsystem": "vmd", 00:20:52.128 "config": [] 00:20:52.128 }, 00:20:52.128 { 00:20:52.128 "subsystem": "accel", 00:20:52.128 "config": [ 00:20:52.128 { 00:20:52.128 "method": "accel_set_options", 00:20:52.128 "params": { 00:20:52.128 "small_cache_size": 128, 00:20:52.128 "large_cache_size": 16, 00:20:52.128 "task_count": 2048, 00:20:52.128 "sequence_count": 2048, 00:20:52.128 "buf_count": 2048 00:20:52.128 } 00:20:52.128 } 00:20:52.128 ] 00:20:52.128 }, 00:20:52.128 { 00:20:52.128 "subsystem": "bdev", 00:20:52.128 "config": [ 00:20:52.128 { 00:20:52.128 "method": "bdev_set_options", 00:20:52.128 "params": { 00:20:52.128 "bdev_io_pool_size": 65535, 00:20:52.128 "bdev_io_cache_size": 256, 00:20:52.128 "bdev_auto_examine": true, 00:20:52.128 "iobuf_small_cache_size": 128, 00:20:52.128 "iobuf_large_cache_size": 16 00:20:52.128 } 00:20:52.128 }, 
00:20:52.128 { 00:20:52.128 "method": "bdev_raid_set_options", 00:20:52.128 "params": { 00:20:52.128 "process_window_size_kb": 1024 00:20:52.128 } 00:20:52.128 }, 00:20:52.128 { 00:20:52.128 "method": "bdev_iscsi_set_options", 00:20:52.128 "params": { 00:20:52.128 "timeout_sec": 30 00:20:52.128 } 00:20:52.128 }, 00:20:52.128 { 00:20:52.128 "method": "bdev_nvme_set_options", 00:20:52.128 "params": { 00:20:52.128 "action_on_timeout": "none", 00:20:52.128 "timeout_us": 0, 00:20:52.128 "timeout_admin_us": 0, 00:20:52.128 "keep_alive_timeout_ms": 10000, 00:20:52.128 "arbitration_burst": 0, 00:20:52.128 "low_priority_weight": 0, 00:20:52.128 "medium_priority_weight": 0, 00:20:52.128 "high_priority_weight": 0, 00:20:52.128 "nvme_adminq_poll_period_us": 10000, 00:20:52.128 "nvme_ioq_poll_period_us": 0, 00:20:52.128 "io_queue_requests": 512, 00:20:52.128 "delay_cmd_submit": true, 00:20:52.128 "transport_retry_count": 4, 00:20:52.128 "bdev_retry_count": 3, 00:20:52.128 "transport_ack_timeout": 0, 00:20:52.128 "ctrlr_loss_timeout_sec": 0, 00:20:52.128 "reconnect_delay_sec": 0, 00:20:52.128 "fast_io_fail_timeout_sec": 0, 00:20:52.128 "disable_auto_failback": false, 00:20:52.128 "generate_uuids": false, 00:20:52.128 "transport_tos": 0, 00:20:52.128 "nvme_error_stat": false, 00:20:52.128 "rdma_srq_size": 0, 00:20:52.128 "io_path_stat": false, 00:20:52.128 "allow_accel_sequence": false, 00:20:52.128 "rdma_max_cq_size": 0, 00:20:52.128 "rdma_cm_event_timeout_ms": 0, 00:20:52.128 "dhchap_digests": [ 00:20:52.128 "sha256", 00:20:52.128 "sha384", 00:20:52.128 "sha512" 00:20:52.128 ], 00:20:52.128 "dhchap_dhgroups": [ 00:20:52.128 "null", 00:20:52.128 "ffdhe2048", 00:20:52.128 "ffdhe3072", 00:20:52.128 "ffdhe4096", 00:20:52.128 "ffdhe6144", 00:20:52.128 "ffdhe8192" 00:20:52.128 ] 00:20:52.128 } 00:20:52.128 }, 00:20:52.128 { 00:20:52.128 "method": "bdev_nvme_attach_controller", 00:20:52.129 "params": { 00:20:52.129 "name": "nvme0", 00:20:52.129 "trtype": "TCP", 00:20:52.129 "adrfam": "IPv4", 00:20:52.129 "traddr": "10.0.0.2", 00:20:52.129 "trsvcid": "4420", 00:20:52.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.129 "prchk_reftag": false, 00:20:52.129 "prchk_guard": false, 00:20:52.129 "ctrlr_loss_timeout_sec": 0, 00:20:52.129 "reconnect_delay_sec": 0, 00:20:52.129 "fast_io_fail_timeout_sec": 0, 00:20:52.129 "psk": "key0", 00:20:52.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.129 "hdgst": false, 00:20:52.129 "ddgst": false 00:20:52.129 } 00:20:52.129 }, 00:20:52.129 { 00:20:52.129 "method": "bdev_nvme_set_hotplug", 00:20:52.129 "params": { 00:20:52.129 "period_us": 100000, 00:20:52.129 "enable": false 00:20:52.129 } 00:20:52.129 }, 00:20:52.129 { 00:20:52.129 "method": "bdev_enable_histogram", 00:20:52.129 "params": { 00:20:52.129 "name": "nvme0n1", 00:20:52.129 "enable": true 00:20:52.129 } 00:20:52.129 }, 00:20:52.129 { 00:20:52.129 "method": "bdev_wait_for_examine" 00:20:52.129 } 00:20:52.129 ] 00:20:52.129 }, 00:20:52.129 { 00:20:52.129 "subsystem": "nbd", 00:20:52.129 "config": [] 00:20:52.129 } 00:20:52.129 ] 00:20:52.129 }' 00:20:52.129 12:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2169902 00:20:52.129 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2169902 ']' 00:20:52.129 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2169902 00:20:52.129 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:52.129 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:52.129 
12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2169902 00:20:52.129 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:52.129 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:52.129 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2169902' 00:20:52.129 killing process with pid 2169902 00:20:52.129 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2169902 00:20:52.129 Received shutdown signal, test time was about 1.000000 seconds 00:20:52.129 00:20:52.129 Latency(us) 00:20:52.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.129 =================================================================================================================== 00:20:52.129 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.129 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2169902 00:20:52.387 12:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2169731 00:20:52.387 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2169731 ']' 00:20:52.387 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2169731 00:20:52.387 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:52.387 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:52.387 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2169731 00:20:52.387 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:20:52.387 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:20:52.387 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2169731' 00:20:52.387 killing process with pid 2169731 00:20:52.387 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2169731 00:20:52.387 [2024-05-15 12:22:20.724163] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:52.387 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2169731 00:20:52.645 12:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:52.645 12:22:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:52.645 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:52.645 12:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:52.645 "subsystems": [ 00:20:52.645 { 00:20:52.645 "subsystem": "keyring", 00:20:52.645 "config": [ 00:20:52.645 { 00:20:52.645 "method": "keyring_file_add_key", 00:20:52.645 "params": { 00:20:52.645 "name": "key0", 00:20:52.645 "path": "/tmp/tmp.HAVg2QEXy6" 00:20:52.645 } 00:20:52.645 } 00:20:52.645 ] 00:20:52.645 }, 00:20:52.645 { 00:20:52.645 "subsystem": "iobuf", 00:20:52.646 "config": [ 00:20:52.646 { 00:20:52.646 "method": "iobuf_set_options", 00:20:52.646 "params": { 00:20:52.646 "small_pool_count": 8192, 00:20:52.646 "large_pool_count": 1024, 00:20:52.646 "small_bufsize": 8192, 00:20:52.646 "large_bufsize": 135168 00:20:52.646 } 00:20:52.646 } 00:20:52.646 ] 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "subsystem": "sock", 00:20:52.646 "config": [ 00:20:52.646 { 00:20:52.646 "method": 
"sock_impl_set_options", 00:20:52.646 "params": { 00:20:52.646 "impl_name": "posix", 00:20:52.646 "recv_buf_size": 2097152, 00:20:52.646 "send_buf_size": 2097152, 00:20:52.646 "enable_recv_pipe": true, 00:20:52.646 "enable_quickack": false, 00:20:52.646 "enable_placement_id": 0, 00:20:52.646 "enable_zerocopy_send_server": true, 00:20:52.646 "enable_zerocopy_send_client": false, 00:20:52.646 "zerocopy_threshold": 0, 00:20:52.646 "tls_version": 0, 00:20:52.646 "enable_ktls": false 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "sock_impl_set_options", 00:20:52.646 "params": { 00:20:52.646 "impl_name": "ssl", 00:20:52.646 "recv_buf_size": 4096, 00:20:52.646 "send_buf_size": 4096, 00:20:52.646 "enable_recv_pipe": true, 00:20:52.646 "enable_quickack": false, 00:20:52.646 "enable_placement_id": 0, 00:20:52.646 "enable_zerocopy_send_server": true, 00:20:52.646 "enable_zerocopy_send_client": false, 00:20:52.646 "zerocopy_threshold": 0, 00:20:52.646 "tls_version": 0, 00:20:52.646 "enable_ktls": false 00:20:52.646 } 00:20:52.646 } 00:20:52.646 ] 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "subsystem": "vmd", 00:20:52.646 "config": [] 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "subsystem": "accel", 00:20:52.646 "config": [ 00:20:52.646 { 00:20:52.646 "method": "accel_set_options", 00:20:52.646 "params": { 00:20:52.646 "small_cache_size": 128, 00:20:52.646 "large_cache_size": 16, 00:20:52.646 "task_count": 2048, 00:20:52.646 "sequence_count": 2048, 00:20:52.646 "buf_count": 2048 00:20:52.646 } 00:20:52.646 } 00:20:52.646 ] 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "subsystem": "bdev", 00:20:52.646 "config": [ 00:20:52.646 { 00:20:52.646 "method": "bdev_set_options", 00:20:52.646 "params": { 00:20:52.646 "bdev_io_pool_size": 65535, 00:20:52.646 "bdev_io_cache_size": 256, 00:20:52.646 "bdev_auto_examine": true, 00:20:52.646 "iobuf_small_cache_size": 128, 00:20:52.646 "iobuf_large_cache_size": 16 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "bdev_raid_set_options", 00:20:52.646 "params": { 00:20:52.646 "process_window_size_kb": 1024 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "bdev_iscsi_set_options", 00:20:52.646 "params": { 00:20:52.646 "timeout_sec": 30 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "bdev_nvme_set_options", 00:20:52.646 "params": { 00:20:52.646 "action_on_timeout": "none", 00:20:52.646 "timeout_us": 0, 00:20:52.646 "timeout_admin_us": 0, 00:20:52.646 "keep_alive_timeout_ms": 10000, 00:20:52.646 "arbitration_burst": 0, 00:20:52.646 "low_priority_weight": 0, 00:20:52.646 "medium_priority_weight": 0, 00:20:52.646 "high_priority_weight": 0, 00:20:52.646 "nvme_adminq_poll_period_us": 10000, 00:20:52.646 "nvme_ioq_poll_period_us": 0, 00:20:52.646 "io_queue_requests": 0, 00:20:52.646 "delay_cmd_submit": true, 00:20:52.646 "transport_retry_count": 4, 00:20:52.646 "bdev_retry_count": 3, 00:20:52.646 "transport_ack_timeout": 0, 00:20:52.646 "ctrlr_loss_timeout_sec": 0, 00:20:52.646 "reconnect_delay_sec": 0, 00:20:52.646 "fast_io_fail_timeout_sec": 0, 00:20:52.646 "disable_auto_failback": false, 00:20:52.646 "generate_uuids": false, 00:20:52.646 "transport_tos": 0, 00:20:52.646 "nvme_error_stat": false, 00:20:52.646 "rdma_srq_size": 0, 00:20:52.646 "io_path_stat": false, 00:20:52.646 "allow_accel_sequence": false, 00:20:52.646 "rdma_max_cq_size": 0, 00:20:52.646 "rdma_cm_event_timeout_ms": 0, 00:20:52.646 "dhchap_digests": [ 00:20:52.646 "sha256", 00:20:52.646 "sha384", 00:20:52.646 "sha512" 
00:20:52.646 ], 00:20:52.646 "dhchap_dhgroups": [ 00:20:52.646 "null", 00:20:52.646 "ffdhe2048", 00:20:52.646 "ffdhe3072", 00:20:52.646 "ffdhe4096", 00:20:52.646 "ffdhe6144", 00:20:52.646 "ffdhe8192" 00:20:52.646 ] 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "bdev_nvme_set_hotplug", 00:20:52.646 "params": { 00:20:52.646 "period_us": 100000, 00:20:52.646 "enable": false 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "bdev_malloc_create", 00:20:52.646 "params": { 00:20:52.646 "name": "malloc0", 00:20:52.646 "num_blocks": 8192, 00:20:52.646 "block_size": 4096, 00:20:52.646 "physical_block_size": 4096, 00:20:52.646 "uuid": "ac368f3a-07f7-4f8e-a374-1ded0402b3d3", 00:20:52.646 "optimal_io_boundary": 0 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "bdev_wait_for_examine" 00:20:52.646 } 00:20:52.646 ] 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "subsystem": "nbd", 00:20:52.646 "config": [] 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "subsystem": "scheduler", 00:20:52.646 "config": [ 00:20:52.646 { 00:20:52.646 "method": "framework_set_scheduler", 00:20:52.646 "params": { 00:20:52.646 "name": "static" 00:20:52.646 } 00:20:52.646 } 00:20:52.646 ] 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "subsystem": "nvmf", 00:20:52.646 "config": [ 00:20:52.646 { 00:20:52.646 "method": "nvmf_set_config", 00:20:52.646 "params": { 00:20:52.646 "discovery_filter": "match_any", 00:20:52.646 "admin_cmd_passthru": { 00:20:52.646 "identify_ctrlr": false 00:20:52.646 } 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "nvmf_set_max_subsystems", 00:20:52.646 "params": { 00:20:52.646 "max_subsystems": 1024 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "nvmf_set_crdt", 00:20:52.646 "params": { 00:20:52.646 "crdt1": 0, 00:20:52.646 "crdt2": 0, 00:20:52.646 "crdt3": 0 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "nvmf_create_transport", 00:20:52.646 "params": { 00:20:52.646 "trtype": "TCP", 00:20:52.646 "max_queue_depth": 128, 00:20:52.646 "max_io_qpairs_per_ctrlr": 127, 00:20:52.646 "in_capsule_data_size": 4096, 00:20:52.646 "max_io_size": 131072, 00:20:52.646 "io_unit_size": 131072, 00:20:52.646 "max_aq_depth": 128, 00:20:52.646 "num_shared_buffers": 511, 00:20:52.646 "buf_cache_size": 4294967295, 00:20:52.646 "dif_insert_or_strip": false, 00:20:52.646 "zcopy": false, 00:20:52.646 "c2h_success": false, 00:20:52.646 "sock_priority": 0, 00:20:52.646 "abort_timeout_sec": 1, 00:20:52.646 "ack_timeout": 0, 00:20:52.646 "data_wr_pool_size": 0 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "nvmf_create_subsystem", 00:20:52.646 "params": { 00:20:52.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.646 "allow_any_host": false, 00:20:52.646 "serial_number": "00000000000000000000", 00:20:52.646 "model_number": "SPDK bdev Controller", 00:20:52.646 "max_namespaces": 32, 00:20:52.646 "min_cntlid": 1, 00:20:52.646 "max_cntlid": 65519, 00:20:52.646 "ana_reporting": false 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "nvmf_subsystem_add_host", 00:20:52.646 "params": { 00:20:52.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.646 "host": "nqn.2016-06.io.spdk:host1", 00:20:52.646 "psk": "key0" 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "nvmf_subsystem_add_ns", 00:20:52.646 "params": { 00:20:52.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.646 "namespace": { 00:20:52.646 "nsid": 1, 00:20:52.646 "bdev_name": "malloc0", 00:20:52.646 
"nguid": "AC368F3A07F74F8EA3741DED0402B3D3", 00:20:52.646 "uuid": "ac368f3a-07f7-4f8e-a374-1ded0402b3d3", 00:20:52.646 "no_auto_visible": false 00:20:52.646 } 00:20:52.646 } 00:20:52.646 }, 00:20:52.646 { 00:20:52.646 "method": "nvmf_subsystem_add_listener", 00:20:52.646 "params": { 00:20:52.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.646 "listen_address": { 00:20:52.646 "trtype": "TCP", 00:20:52.646 "adrfam": "IPv4", 00:20:52.646 "traddr": "10.0.0.2", 00:20:52.646 "trsvcid": "4420" 00:20:52.646 }, 00:20:52.646 "secure_channel": true 00:20:52.646 } 00:20:52.646 } 00:20:52.646 ] 00:20:52.646 } 00:20:52.646 ] 00:20:52.646 }' 00:20:52.646 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.646 12:22:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2170472 00:20:52.646 12:22:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2170472 00:20:52.646 12:22:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:52.646 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2170472 ']' 00:20:52.646 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.646 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:52.646 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.646 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:52.647 12:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.647 [2024-05-15 12:22:21.000062] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:20:52.647 [2024-05-15 12:22:21.000111] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.647 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.647 [2024-05-15 12:22:21.072762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.647 [2024-05-15 12:22:21.146785] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.647 [2024-05-15 12:22:21.146820] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.647 [2024-05-15 12:22:21.146830] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.647 [2024-05-15 12:22:21.146839] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.647 [2024-05-15 12:22:21.146846] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:52.647 [2024-05-15 12:22:21.146915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.905 [2024-05-15 12:22:21.349287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.905 [2024-05-15 12:22:21.381283] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:52.905 [2024-05-15 12:22:21.381342] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:52.905 [2024-05-15 12:22:21.395334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.471 12:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:53.471 12:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:53.471 12:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.471 12:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:53.472 12:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.472 12:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.472 12:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2170748 00:20:53.472 12:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2170748 /var/tmp/bdevperf.sock 00:20:53.472 12:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2170748 ']' 00:20:53.472 12:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.472 12:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:53.472 12:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:53.472 12:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:53.472 12:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:53.472 12:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:53.472 "subsystems": [ 00:20:53.472 { 00:20:53.472 "subsystem": "keyring", 00:20:53.472 "config": [ 00:20:53.472 { 00:20:53.472 "method": "keyring_file_add_key", 00:20:53.472 "params": { 00:20:53.472 "name": "key0", 00:20:53.472 "path": "/tmp/tmp.HAVg2QEXy6" 00:20:53.472 } 00:20:53.472 } 00:20:53.472 ] 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "subsystem": "iobuf", 00:20:53.472 "config": [ 00:20:53.472 { 00:20:53.472 "method": "iobuf_set_options", 00:20:53.472 "params": { 00:20:53.472 "small_pool_count": 8192, 00:20:53.472 "large_pool_count": 1024, 00:20:53.472 "small_bufsize": 8192, 00:20:53.472 "large_bufsize": 135168 00:20:53.472 } 00:20:53.472 } 00:20:53.472 ] 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "subsystem": "sock", 00:20:53.472 "config": [ 00:20:53.472 { 00:20:53.472 "method": "sock_impl_set_options", 00:20:53.472 "params": { 00:20:53.472 "impl_name": "posix", 00:20:53.472 "recv_buf_size": 2097152, 00:20:53.472 "send_buf_size": 2097152, 00:20:53.472 "enable_recv_pipe": true, 00:20:53.472 "enable_quickack": false, 00:20:53.472 "enable_placement_id": 0, 00:20:53.472 "enable_zerocopy_send_server": true, 00:20:53.472 "enable_zerocopy_send_client": false, 00:20:53.472 "zerocopy_threshold": 0, 00:20:53.472 "tls_version": 0, 00:20:53.472 "enable_ktls": false 00:20:53.472 } 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "method": "sock_impl_set_options", 00:20:53.472 "params": { 00:20:53.472 "impl_name": "ssl", 00:20:53.472 "recv_buf_size": 4096, 00:20:53.472 "send_buf_size": 4096, 00:20:53.472 "enable_recv_pipe": true, 00:20:53.472 "enable_quickack": false, 00:20:53.472 "enable_placement_id": 0, 00:20:53.472 "enable_zerocopy_send_server": true, 00:20:53.472 "enable_zerocopy_send_client": false, 00:20:53.472 "zerocopy_threshold": 0, 00:20:53.472 "tls_version": 0, 00:20:53.472 "enable_ktls": false 00:20:53.472 } 00:20:53.472 } 00:20:53.472 ] 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "subsystem": "vmd", 00:20:53.472 "config": [] 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "subsystem": "accel", 00:20:53.472 "config": [ 00:20:53.472 { 00:20:53.472 "method": "accel_set_options", 00:20:53.472 "params": { 00:20:53.472 "small_cache_size": 128, 00:20:53.472 "large_cache_size": 16, 00:20:53.472 "task_count": 2048, 00:20:53.472 "sequence_count": 2048, 00:20:53.472 "buf_count": 2048 00:20:53.472 } 00:20:53.472 } 00:20:53.472 ] 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "subsystem": "bdev", 00:20:53.472 "config": [ 00:20:53.472 { 00:20:53.472 "method": "bdev_set_options", 00:20:53.472 "params": { 00:20:53.472 "bdev_io_pool_size": 65535, 00:20:53.472 "bdev_io_cache_size": 256, 00:20:53.472 "bdev_auto_examine": true, 00:20:53.472 "iobuf_small_cache_size": 128, 00:20:53.472 "iobuf_large_cache_size": 16 00:20:53.472 } 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "method": "bdev_raid_set_options", 00:20:53.472 "params": { 00:20:53.472 "process_window_size_kb": 1024 00:20:53.472 } 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "method": "bdev_iscsi_set_options", 00:20:53.472 "params": { 00:20:53.472 "timeout_sec": 30 00:20:53.472 } 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "method": "bdev_nvme_set_options", 00:20:53.472 "params": { 00:20:53.472 "action_on_timeout": "none", 00:20:53.472 "timeout_us": 0, 00:20:53.472 "timeout_admin_us": 0, 00:20:53.472 "keep_alive_timeout_ms": 10000, 00:20:53.472 "arbitration_burst": 0, 00:20:53.472 
"low_priority_weight": 0, 00:20:53.472 "medium_priority_weight": 0, 00:20:53.472 "high_priority_weight": 0, 00:20:53.472 "nvme_adminq_poll_period_us": 10000, 00:20:53.472 "nvme_ioq_poll_period_us": 0, 00:20:53.472 "io_queue_requests": 512, 00:20:53.472 "delay_cmd_submit": true, 00:20:53.472 "transport_retry_count": 4, 00:20:53.472 "bdev_retry_count": 3, 00:20:53.472 "transport_ack_timeout": 0, 00:20:53.472 "ctrlr_loss_timeout_sec": 0, 00:20:53.472 "reconnect_delay_sec": 0, 00:20:53.472 "fast_io_fail_timeout_sec": 0, 00:20:53.472 "disable_auto_failback": false, 00:20:53.472 "generate_uuids": false, 00:20:53.472 "transport_tos": 0, 00:20:53.472 "nvme_error_stat": false, 00:20:53.472 "rdma_srq_size": 0, 00:20:53.472 "io_path_stat": false, 00:20:53.472 "allow_accel_sequence": false, 00:20:53.472 "rdma_max_cq_size": 0, 00:20:53.472 "rdma_cm_event_timeout_ms": 0, 00:20:53.472 "dhchap_digests": [ 00:20:53.472 "sha256", 00:20:53.472 "sha384", 00:20:53.472 "sha512" 00:20:53.472 ], 00:20:53.472 "dhchap_dhgroups": [ 00:20:53.472 "null", 00:20:53.472 "ffdhe2048", 00:20:53.472 "ffdhe3072", 00:20:53.472 "ffdhe4096", 00:20:53.472 "ffdhe6144", 00:20:53.472 "ffdhe8192" 00:20:53.472 ] 00:20:53.472 } 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "method": "bdev_nvme_attach_controller", 00:20:53.472 "params": { 00:20:53.472 "name": "nvme0", 00:20:53.472 "trtype": "TCP", 00:20:53.472 "adrfam": "IPv4", 00:20:53.472 "traddr": "10.0.0.2", 00:20:53.472 "trsvcid": "4420", 00:20:53.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.472 "prchk_reftag": false, 00:20:53.472 "prchk_guard": false, 00:20:53.472 "ctrlr_loss_timeout_sec": 0, 00:20:53.472 "reconnect_delay_sec": 0, 00:20:53.472 "fast_io_fail_timeout_sec": 0, 00:20:53.472 "psk": "key0", 00:20:53.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.472 "hdgst": false, 00:20:53.472 "ddgst": false 00:20:53.472 } 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "method": "bdev_nvme_set_hotplug", 00:20:53.472 "params": { 00:20:53.472 "period_us": 100000, 00:20:53.472 "enable": false 00:20:53.472 } 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "method": "bdev_enable_histogram", 00:20:53.472 "params": { 00:20:53.472 "name": "nvme0n1", 00:20:53.472 "enable": true 00:20:53.472 } 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "method": "bdev_wait_for_examine" 00:20:53.472 } 00:20:53.472 ] 00:20:53.472 }, 00:20:53.472 { 00:20:53.472 "subsystem": "nbd", 00:20:53.472 "config": [] 00:20:53.472 } 00:20:53.472 ] 00:20:53.472 }' 00:20:53.472 12:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.472 [2024-05-15 12:22:21.887095] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:20:53.472 [2024-05-15 12:22:21.887146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170748 ] 00:20:53.472 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.472 [2024-05-15 12:22:21.955077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.731 [2024-05-15 12:22:22.031131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.731 [2024-05-15 12:22:22.174216] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.297 12:22:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:54.297 12:22:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:20:54.297 12:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:54.297 12:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:54.555 12:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.555 12:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:54.555 Running I/O for 1 seconds... 00:20:55.488 00:20:55.488 Latency(us) 00:20:55.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.488 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:55.488 Verification LBA range: start 0x0 length 0x2000 00:20:55.488 nvme0n1 : 1.06 1832.33 7.16 0.00 0.00 68252.81 7077.89 104438.17 00:20:55.488 =================================================================================================================== 00:20:55.488 Total : 1832.33 7.16 0.00 0.00 68252.81 7077.89 104438.17 00:20:55.747 0 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # type=--id 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # id=0 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # for n in $shm_files 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:55.747 nvmf_trace.0 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # return 0 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2170748 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2170748 ']' 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2170748 
00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2170748 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2170748' 00:20:55.747 killing process with pid 2170748 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2170748 00:20:55.747 Received shutdown signal, test time was about 1.000000 seconds 00:20:55.747 00:20:55.747 Latency(us) 00:20:55.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.747 =================================================================================================================== 00:20:55.747 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.747 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2170748 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.006 rmmod nvme_tcp 00:20:56.006 rmmod nvme_fabrics 00:20:56.006 rmmod nvme_keyring 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2170472 ']' 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2170472 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2170472 ']' 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2170472 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2170472 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2170472' 00:20:56.006 killing process with pid 2170472 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2170472 00:20:56.006 [2024-05-15 12:22:24.506265] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:56.006 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- 
# wait 2170472 00:20:56.264 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:56.264 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:56.264 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:56.264 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:56.264 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:56.264 12:22:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.264 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.264 12:22:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.801 12:22:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:58.801 12:22:26 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ta5dgaKo65 /tmp/tmp.HwU5N4nqbG /tmp/tmp.HAVg2QEXy6 00:20:58.801 00:20:58.801 real 1m27.204s 00:20:58.801 user 2m9.009s 00:20:58.801 sys 0m33.882s 00:20:58.801 12:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:58.801 12:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.801 ************************************ 00:20:58.801 END TEST nvmf_tls 00:20:58.801 ************************************ 00:20:58.801 12:22:26 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:58.801 12:22:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:20:58.801 12:22:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:58.801 12:22:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:58.801 ************************************ 00:20:58.801 START TEST nvmf_fips 00:20:58.801 ************************************ 00:20:58.801 12:22:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:58.801 * Looking for test storage... 
00:20:58.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.801 12:22:27 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:58.801 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:20:58.802 Error setting digest 00:20:58.802 00C2ACDDB57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:58.802 00C2ACDDB57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:58.802 12:22:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:05.403 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.403 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:05.403 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:05.403 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:05.403 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:05.403 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:05.403 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:05.403 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:05.403 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:05.403 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:05.404 
12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:05.404 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:05.404 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:05.404 Found net devices under 0000:af:00.0: cvl_0_0 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:05.404 Found net devices under 0000:af:00.1: cvl_0_1 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:05.404 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.662 12:22:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:05.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:21:05.662 00:21:05.662 --- 10.0.0.2 ping statistics --- 00:21:05.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.662 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:05.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:21:05.662 00:21:05.662 --- 10.0.0.1 ping statistics --- 00:21:05.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.662 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2174954 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2174954 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 2174954 ']' 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:05.662 12:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:05.662 [2024-05-15 12:22:34.163450] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:21:05.662 [2024-05-15 12:22:34.163501] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.920 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.920 [2024-05-15 12:22:34.236420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.920 [2024-05-15 12:22:34.304404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.920 [2024-05-15 12:22:34.304444] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
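The nvmf_tcp_init sequence traced above reduces to a small two-port loopback: the target-side E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while its sibling (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic actually crosses the NIC. A minimal standalone sketch of that setup, using only commands that appear in the trace (interface names and addresses are specific to this test bed; run as root):

  # Rebuild the namespace loopback used by nvmf_tcp_init above.
  # Assumes the ice ports already exist as cvl_0_0 / cvl_0_1.
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address, default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator

Every later command aimed at the target is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt launch that follows carries that prefix.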
00:21:05.920 [2024-05-15 12:22:34.304453] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.920 [2024-05-15 12:22:34.304461] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.920 [2024-05-15 12:22:34.304467] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.920 [2024-05-15 12:22:34.304488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:06.501 12:22:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:06.778 [2024-05-15 12:22:35.134851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.778 [2024-05-15 12:22:35.150827] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:06.778 [2024-05-15 12:22:35.150872] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.778 [2024-05-15 12:22:35.151061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.778 [2024-05-15 12:22:35.179259] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:06.778 malloc0 00:21:06.778 12:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:06.778 12:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2175045 00:21:06.778 12:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:06.778 12:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2175045 /var/tmp/bdevperf.sock 00:21:06.778 12:22:35 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@828 -- # '[' -z 2175045 ']' 00:21:06.778 12:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.778 12:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:06.778 12:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.778 12:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:06.778 12:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:06.778 [2024-05-15 12:22:35.259834] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:21:06.778 [2024-05-15 12:22:35.259885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175045 ] 00:21:06.778 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.036 [2024-05-15 12:22:35.327573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.036 [2024-05-15 12:22:35.401561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.600 12:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:07.600 12:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:21:07.601 12:22:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:07.858 [2024-05-15 12:22:36.185349] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.858 [2024-05-15 12:22:36.185445] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:07.858 TLSTESTn1 00:21:07.858 12:22:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:07.858 Running I/O for 10 seconds... 
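Stripped of the xtrace noise, the FIPS exercise launched above is short: point OpenSSL at the generated FIPS config, write a pre-shared TLS key with tight permissions, attach a TLS-protected controller through bdevperf's RPC socket, and run the verify workload for ten seconds. A sketch of those steps, using only the paths, flags and NQNs visible in the trace ($SPDK abbreviates the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout; the redirection on the echo is assumed, since xtrace does not show it):

  export OPENSSL_CONF=spdk_fips.conf        # config produced by build_openssl_config
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$SPDK/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"              # redirection assumed
  chmod 0600 "$key_path"
  # Target-side subsystem and TLS listener on 10.0.0.2:4420 are configured
  # through $SPDK/scripts/rpc.py (setup_nvmf_tgt_conf); details are elided in the trace.
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The earlier openssl md5 /dev/fd/62 probe failing with "unsupported" is the sanity check that the host really enforces FIPS; the NOT wrapper around it turns a successful MD5 into a test failure, so the run only proceeds when MD5 is blocked, exactly as shown.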
00:21:20.052 00:21:20.052 Latency(us) 00:21:20.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.052 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:20.052 Verification LBA range: start 0x0 length 0x2000 00:21:20.052 TLSTESTn1 : 10.06 1872.87 7.32 0.00 0.00 68164.40 5321.52 109890.76 00:21:20.052 =================================================================================================================== 00:21:20.052 Total : 1872.87 7.32 0.00 0.00 68164.40 5321.52 109890.76 00:21:20.052 0 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # type=--id 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # id=0 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # for n in $shm_files 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:20.052 nvmf_trace.0 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # return 0 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2175045 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 2175045 ']' 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 2175045 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2175045 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2175045' 00:21:20.052 killing process with pid 2175045 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 2175045 00:21:20.052 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.052 00:21:20.052 Latency(us) 00:21:20.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.052 =================================================================================================================== 00:21:20.052 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.052 [2024-05-15 12:22:46.601346] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 2175045 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:20.052 rmmod nvme_tcp 00:21:20.052 rmmod nvme_fabrics 00:21:20.052 rmmod nvme_keyring 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2174954 ']' 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2174954 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 2174954 ']' 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 2174954 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2174954 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2174954' 00:21:20.052 killing process with pid 2174954 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 2174954 00:21:20.052 [2024-05-15 12:22:46.934704] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:20.052 [2024-05-15 12:22:46.934746] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:20.052 12:22:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 2174954 00:21:20.052 12:22:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:20.052 12:22:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:20.052 12:22:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:20.052 12:22:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:20.052 12:22:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:20.052 12:22:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.052 12:22:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.052 12:22:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.986 12:22:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:20.986 12:22:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:20.986 00:21:20.986 real 0m22.321s 00:21:20.986 user 0m22.278s 00:21:20.986 sys 0m10.908s 00:21:20.986 12:22:49 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:20.986 12:22:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:20.986 ************************************ 00:21:20.986 END TEST nvmf_fips 00:21:20.986 ************************************ 00:21:20.986 12:22:49 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:20.986 12:22:49 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:20.986 12:22:49 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:20.986 12:22:49 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:20.986 12:22:49 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:20.986 12:22:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:27.543 12:22:55 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.543 12:22:55 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.543 12:22:55 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.543 12:22:55 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.543 12:22:55 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.543 12:22:55 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.544 12:22:55 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:27.544 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:27.544 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:27.544 Found net devices under 0000:af:00.0: cvl_0_0 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:27.544 Found net devices under 0000:af:00.1: cvl_0_1 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:27.544 12:22:55 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
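The cleanup block just above (between the bdevperf summary and this perf_adq launch) unwinds the fips run: the nvme kernel modules are unloaded, the target process is killed, the namespace and addresses are removed, and the throw-away key file is deleted. Condensed to the commands shown, with the body of _remove_spdk_ns, which the trace does not expand, assumed to delete the cvl_0_0_ns_spdk namespace:

  modprobe -v -r nvme-tcp                 # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                         # 2174954 in this run
  ip netns delete cvl_0_0_ns_spdk         # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1
  rm -f $SPDK/test/nvmf/fips/key.txt

With the fips run unwound, nvmf.sh repeats the same E810/PCI discovery and, because NET_TYPE=phy and two cvl interfaces were found, hands control to perf_adq.sh with --transport=tcp.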
00:21:27.544 12:22:55 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:27.544 12:22:55 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:27.544 12:22:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:27.544 ************************************ 00:21:27.544 START TEST nvmf_perf_adq 00:21:27.544 ************************************ 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:27.544 * Looking for test storage... 00:21:27.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.544 12:22:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.545 12:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:34.107 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:34.107 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:34.107 Found net devices under 0000:af:00.0: cvl_0_0 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:34.107 Found net devices under 0000:af:00.1: cvl_0_1 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.107 12:23:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:21:34.108 12:23:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:34.108 12:23:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:34.108 12:23:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:35.041 12:23:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:37.573 12:23:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:42.843 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:42.843 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:42.843 Found net devices under 0000:af:00.0: cvl_0_0 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:42.843 Found net devices under 0000:af:00.1: cvl_0_1 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.843 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:42.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:21:42.843 00:21:42.843 --- 10.0.0.2 ping statistics --- 00:21:42.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.844 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:21:42.844 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:21:42.844 00:21:42.844 --- 10.0.0.1 ping statistics --- 00:21:42.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.844 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:21:42.844 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.844 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:42.844 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:42.844 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.844 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:42.844 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:42.844 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.844 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:42.844 12:23:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2185503 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2185503 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # '[' -z 2185503 ']' 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
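The nvmf_tcp_init sequence traced above is where the harness turns the two back-to-back E810 ports into a self-contained TCP test topology: the target-side port cvl_0_0 is moved into its own network namespace, both ends get 10.0.0.x/24 addresses, TCP port 4420 is opened on the initiator side, and reachability is verified with ping in both directions. A condensed sketch of the same steps, with the names taken from the log (assumes root privileges and that the two ports are cabled to each other):

    # Target port lives in its own namespace; the initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                                    # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> initiator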
00:21:42.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:42.844 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.844 [2024-05-15 12:23:11.064374] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:21:42.844 [2024-05-15 12:23:11.064422] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.844 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.844 [2024-05-15 12:23:11.138674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.844 [2024-05-15 12:23:11.216177] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.844 [2024-05-15 12:23:11.216220] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.844 [2024-05-15 12:23:11.216229] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.844 [2024-05-15 12:23:11.216238] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.844 [2024-05-15 12:23:11.216261] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.844 [2024-05-15 12:23:11.216310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.844 [2024-05-15 12:23:11.216427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.844 [2024-05-15 12:23:11.216516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.844 [2024-05-15 12:23:11.216518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.408 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:43.408 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@861 -- # return 0 00:21:43.408 12:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:43.408 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:43.408 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.408 12:23:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.408 12:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:43.408 12:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:43.408 12:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:43.408 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.408 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.408 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.666 12:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:43.666 12:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:43.666 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.666 12:23:11 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.666 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.666 12:23:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:43.666 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.666 12:23:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.666 [2024-05-15 12:23:12.069699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.666 Malloc1 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.666 [2024-05-15 12:23:12.119914] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:43.666 [2024-05-15 12:23:12.120164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2185788 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:43.666 12:23:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:43.666 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.195 12:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:46.195 12:23:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.195 12:23:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.195 12:23:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.195 12:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:46.195 "tick_rate": 2500000000, 00:21:46.195 "poll_groups": [ 00:21:46.195 { 00:21:46.195 "name": "nvmf_tgt_poll_group_000", 00:21:46.195 "admin_qpairs": 1, 00:21:46.195 "io_qpairs": 1, 00:21:46.195 "current_admin_qpairs": 1, 00:21:46.195 "current_io_qpairs": 1, 00:21:46.195 "pending_bdev_io": 0, 00:21:46.195 "completed_nvme_io": 19542, 00:21:46.195 "transports": [ 00:21:46.195 { 00:21:46.195 "trtype": "TCP" 00:21:46.195 } 00:21:46.195 ] 00:21:46.195 }, 00:21:46.195 { 00:21:46.195 "name": "nvmf_tgt_poll_group_001", 00:21:46.195 "admin_qpairs": 0, 00:21:46.195 "io_qpairs": 1, 00:21:46.195 "current_admin_qpairs": 0, 00:21:46.195 "current_io_qpairs": 1, 00:21:46.195 "pending_bdev_io": 0, 00:21:46.195 "completed_nvme_io": 19656, 00:21:46.195 "transports": [ 00:21:46.195 { 00:21:46.195 "trtype": "TCP" 00:21:46.195 } 00:21:46.195 ] 00:21:46.195 }, 00:21:46.195 { 00:21:46.195 "name": "nvmf_tgt_poll_group_002", 00:21:46.195 "admin_qpairs": 0, 00:21:46.195 "io_qpairs": 1, 00:21:46.195 "current_admin_qpairs": 0, 00:21:46.195 "current_io_qpairs": 1, 00:21:46.195 "pending_bdev_io": 0, 00:21:46.195 "completed_nvme_io": 19482, 00:21:46.195 "transports": [ 00:21:46.195 { 00:21:46.195 "trtype": "TCP" 00:21:46.195 } 00:21:46.195 ] 00:21:46.195 }, 00:21:46.195 { 00:21:46.195 "name": "nvmf_tgt_poll_group_003", 00:21:46.195 "admin_qpairs": 0, 00:21:46.195 "io_qpairs": 1, 00:21:46.195 "current_admin_qpairs": 0, 00:21:46.195 "current_io_qpairs": 1, 00:21:46.195 "pending_bdev_io": 0, 00:21:46.195 "completed_nvme_io": 19733, 00:21:46.195 "transports": [ 00:21:46.195 { 00:21:46.195 "trtype": "TCP" 00:21:46.195 } 00:21:46.195 ] 00:21:46.195 } 00:21:46.195 ] 00:21:46.195 }' 00:21:46.195 12:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:46.195 12:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:46.195 12:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:46.195 12:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:46.195 12:23:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2185788 00:21:54.300 Initializing NVMe Controllers 00:21:54.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:54.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:54.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:54.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:54.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:54.300 Initialization complete. Launching workers. 
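Before the spdk_nvme_perf run above, adq_configure_nvmf_target drives the target (started with --wait-for-rpc) entirely over JSON-RPC: it queries the default posix sock implementation, sets the placement-id and zero-copy options, finishes framework initialization, creates the TCP transport with a socket priority, and exports a Malloc namespace on 10.0.0.2:4420. The nvmf_get_stats / jq check then confirms that each of the four poll groups owns exactly one I/O queue pair (count=4), i.e. in this baseline placement-id 0 run the connections are spread evenly across cores. A rough equivalent using SPDK's scripts/rpc.py, with the arguments copied from the trace (an illustration, not the harness code itself):

    RPC=./scripts/rpc.py    # assumes an SPDK checkout and the default /var/tmp/spdk.sock

    IMPL=$($RPC sock_get_default_impl | jq -r .impl_name)     # "posix" on this run
    $RPC sock_impl_set_options -i "$IMPL" --enable-placement-id 0 --enable-zerocopy-send-server
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Same check as the test: count poll groups that currently carry one I/O qpair.
    $RPC nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l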
00:21:54.300 ======================================================== 00:21:54.300 Latency(us) 00:21:54.300 Device Information : IOPS MiB/s Average min max 00:21:54.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10418.50 40.70 6144.28 1626.15 10116.60 00:21:54.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10453.10 40.83 6142.77 1618.10 47757.03 00:21:54.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10481.50 40.94 6106.34 1600.33 12008.16 00:21:54.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10447.30 40.81 6126.28 1597.22 10738.36 00:21:54.300 ======================================================== 00:21:54.300 Total : 41800.39 163.28 6129.89 1597.22 47757.03 00:21:54.300 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.300 rmmod nvme_tcp 00:21:54.300 rmmod nvme_fabrics 00:21:54.300 rmmod nvme_keyring 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2185503 ']' 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2185503 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' -z 2185503 ']' 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # kill -0 2185503 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # uname 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2185503 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2185503' 00:21:54.300 killing process with pid 2185503 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # kill 2185503 00:21:54.300 [2024-05-15 12:23:22.463112] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@971 -- # wait 2185503 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:54.300 12:23:22 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.300 12:23:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.829 12:23:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:56.829 12:23:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:56.829 12:23:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:57.764 12:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:00.294 12:23:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.602 
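Between the baseline run torn down above and the ADQ-enabled run that follows, adq_reload_driver resets the E810 ports by unloading and reloading the ice driver and waiting for the interfaces to come back, so the traffic-class configuration applied next starts from a clean slate. In plain shell this is simply (sketch; assumes nothing else on the host depends on the ice module):

    rmmod ice
    modprobe ice
    sleep 5        # give the cvl_* ports time to reappear before reconfiguring them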
12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:05.602 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.602 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:05.603 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:05.603 Found net devices under 0000:af:00.0: cvl_0_0 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:05.603 Found net devices under 0000:af:00.1: cvl_0_1 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:05.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:22:05.603 00:22:05.603 --- 10.0.0.2 ping statistics --- 00:22:05.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.603 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:22:05.603 00:22:05.603 --- 10.0.0.1 ping statistics --- 00:22:05.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.603 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:05.603 net.core.busy_poll = 1 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:05.603 net.core.busy_read = 1 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2189762 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2189762 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # '[' -z 2189762 ']' 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.603 12:23:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:05.603 [2024-05-15 12:23:33.909907] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:22:05.603 [2024-05-15 12:23:33.909958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.603 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.603 [2024-05-15 12:23:33.984817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.603 [2024-05-15 12:23:34.061595] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.603 [2024-05-15 12:23:34.061634] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.604 [2024-05-15 12:23:34.061646] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.604 [2024-05-15 12:23:34.061654] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.604 [2024-05-15 12:23:34.061678] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
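The adq_configure_driver block above is where ADQ is actually enabled on the target port: hardware TC offload is switched on, the driver's channel-pkt-inspect-optimize private flag is turned off, busy polling is enabled via sysctl, an mqprio root qdisc splits the queues into two traffic classes, and a hardware flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1. Condensed from the trace (the ethtool/tc commands run inside the target namespace, the sysctls in the root namespace; the set_xps_rxqs helper invoked afterwards ships with SPDK under scripts/perf/nvmf/):

    NS="ip netns exec cvl_0_0_ns_spdk"

    $NS ethtool --offload cvl_0_0 hw-tc-offload on
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

    # Two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded to the NIC.
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP (dst 10.0.0.2, port 4420) into TC1 entirely in hardware.
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1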
00:22:05.604 [2024-05-15 12:23:34.061725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.604 [2024-05-15 12:23:34.061820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.604 [2024-05-15 12:23:34.061917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.604 [2024-05-15 12:23:34.061919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@861 -- # return 0 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.540 [2024-05-15 12:23:34.902996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.540 Malloc1 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.540 12:23:34 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.540 [2024-05-15 12:23:34.949448] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:06.540 [2024-05-15 12:23:34.949701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2189923 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:06.540 12:23:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:06.540 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.443 12:23:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:08.443 12:23:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:08.443 12:23:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.701 12:23:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:08.701 12:23:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:08.701 "tick_rate": 2500000000, 00:22:08.701 "poll_groups": [ 00:22:08.701 { 00:22:08.701 "name": "nvmf_tgt_poll_group_000", 00:22:08.701 "admin_qpairs": 1, 00:22:08.701 "io_qpairs": 1, 00:22:08.701 "current_admin_qpairs": 1, 00:22:08.702 "current_io_qpairs": 1, 00:22:08.702 "pending_bdev_io": 0, 00:22:08.702 "completed_nvme_io": 21497, 00:22:08.702 "transports": [ 00:22:08.702 { 00:22:08.702 "trtype": "TCP" 00:22:08.702 } 00:22:08.702 ] 00:22:08.702 }, 00:22:08.702 { 00:22:08.702 "name": "nvmf_tgt_poll_group_001", 00:22:08.702 "admin_qpairs": 0, 00:22:08.702 "io_qpairs": 3, 00:22:08.702 "current_admin_qpairs": 0, 00:22:08.702 "current_io_qpairs": 3, 00:22:08.702 "pending_bdev_io": 0, 00:22:08.702 "completed_nvme_io": 30455, 00:22:08.702 "transports": [ 00:22:08.702 { 00:22:08.702 "trtype": "TCP" 00:22:08.702 } 00:22:08.702 ] 00:22:08.702 }, 00:22:08.702 { 00:22:08.702 "name": 
"nvmf_tgt_poll_group_002", 00:22:08.702 "admin_qpairs": 0, 00:22:08.702 "io_qpairs": 0, 00:22:08.702 "current_admin_qpairs": 0, 00:22:08.702 "current_io_qpairs": 0, 00:22:08.702 "pending_bdev_io": 0, 00:22:08.702 "completed_nvme_io": 0, 00:22:08.702 "transports": [ 00:22:08.702 { 00:22:08.702 "trtype": "TCP" 00:22:08.702 } 00:22:08.702 ] 00:22:08.702 }, 00:22:08.702 { 00:22:08.702 "name": "nvmf_tgt_poll_group_003", 00:22:08.702 "admin_qpairs": 0, 00:22:08.702 "io_qpairs": 0, 00:22:08.702 "current_admin_qpairs": 0, 00:22:08.702 "current_io_qpairs": 0, 00:22:08.702 "pending_bdev_io": 0, 00:22:08.702 "completed_nvme_io": 0, 00:22:08.702 "transports": [ 00:22:08.702 { 00:22:08.702 "trtype": "TCP" 00:22:08.702 } 00:22:08.702 ] 00:22:08.702 } 00:22:08.702 ] 00:22:08.702 }' 00:22:08.702 12:23:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:08.702 12:23:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:08.702 12:23:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:08.702 12:23:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:08.702 12:23:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2189923 00:22:16.814 Initializing NVMe Controllers 00:22:16.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:16.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:16.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:16.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:16.814 Initialization complete. Launching workers. 
00:22:16.814 ======================================================== 00:22:16.814 Latency(us) 00:22:16.814 Device Information : IOPS MiB/s Average min max 00:22:16.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5821.70 22.74 10994.48 1937.76 57788.87 00:22:16.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11693.29 45.68 5473.91 1719.81 11638.54 00:22:16.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5373.80 20.99 11910.80 1966.52 57195.89 00:22:16.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5356.80 20.92 11946.86 1992.27 56097.97 00:22:16.814 ======================================================== 00:22:16.814 Total : 28245.58 110.33 9063.99 1719.81 57788.87 00:22:16.814 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:16.814 rmmod nvme_tcp 00:22:16.814 rmmod nvme_fabrics 00:22:16.814 rmmod nvme_keyring 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2189762 ']' 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2189762 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' -z 2189762 ']' 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # kill -0 2189762 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # uname 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2189762 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2189762' 00:22:16.814 killing process with pid 2189762 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # kill 2189762 00:22:16.814 [2024-05-15 12:23:45.283902] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:16.814 12:23:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@971 -- # wait 2189762 00:22:17.074 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:17.074 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:17.074 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:17.074 
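Compared with the baseline, the ADQ-enabled run above distributes work quite differently: with placement-id 1 and the flower filter active, nvmf_get_stats shows the I/O queue pairs concentrated on two poll groups (1 and 3 qpairs) while the other two carry none, which is exactly what the select(.current_io_qpairs == 0) check verifies (count=2; the test treats fewer than two idle groups as a failure). The per-core latency table above shows the same skew, with core 5 carrying roughly twice the IOPS of the other cores. A small jq variation (an illustration, not part of the test) prints the distribution directly from the stats payload:

    ./scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[] | "\(.name): \(.current_io_qpairs) qpairs, \(.completed_nvme_io) completed IOs"'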
12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:17.074 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:17.074 12:23:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.074 12:23:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.074 12:23:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.364 12:23:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:20.364 12:23:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:20.364 00:22:20.364 real 0m52.994s 00:22:20.364 user 2m46.718s 00:22:20.364 sys 0m14.255s 00:22:20.364 12:23:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:20.364 12:23:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:20.364 ************************************ 00:22:20.364 END TEST nvmf_perf_adq 00:22:20.364 ************************************ 00:22:20.364 12:23:48 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:20.364 12:23:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:20.365 12:23:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:20.365 12:23:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:20.365 ************************************ 00:22:20.365 START TEST nvmf_shutdown 00:22:20.365 ************************************ 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:20.365 * Looking for test storage... 
00:22:20.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:20.365 ************************************ 00:22:20.365 START TEST nvmf_shutdown_tc1 00:22:20.365 ************************************ 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc1 00:22:20.365 12:23:48 
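With the perf_adq suite finished (the timing summary and END TEST banner above), run_test hands control to the shutdown suite: shutdown.sh sources test/nvmf/common.sh, sets MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 for its backing bdevs, and starts its first test case, nvmf_shutdown_tc1, with another nvmftestinit. Outside of Jenkins the same suite can be launched roughly like this (sketch; assumes an SPDK checkout, a comparable autorun-spdk.conf environment, and root privileges):

    cd /path/to/spdk        # hypothetical checkout location
    sudo ./test/nvmf/target/shutdown.sh --transport=tcp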
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:20.365 12:23:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:28.478 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:28.478 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.478 12:23:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.478 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:28.478 Found net devices under 0000:af:00.0: cvl_0_0 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:28.479 Found net devices under 0000:af:00.1: cvl_0_1 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:22:28.479 00:22:28.479 --- 10.0.0.2 ping statistics --- 00:22:28.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.479 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:22:28.479 00:22:28.479 --- 10.0.0.1 ping statistics --- 00:22:28.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.479 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2195600 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2195600 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 2195600 ']' 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:28.479 12:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.479 [2024-05-15 12:23:55.941722] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:22:28.479 [2024-05-15 12:23:55.941774] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.479 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.479 [2024-05-15 12:23:56.015527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.479 [2024-05-15 12:23:56.089732] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.479 [2024-05-15 12:23:56.089769] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.479 [2024-05-15 12:23:56.089778] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.479 [2024-05-15 12:23:56.089786] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.479 [2024-05-15 12:23:56.089793] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.479 [2024-05-15 12:23:56.089896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.479 [2024-05-15 12:23:56.089979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.479 [2024-05-15 12:23:56.090089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.479 [2024-05-15 12:23:56.090090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:28.479 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:28.479 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:22:28.479 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.479 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:28.479 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.480 [2024-05-15 12:23:56.807109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.480 12:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.480 Malloc1 00:22:28.480 [2024-05-15 12:23:56.917567] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:28.480 [2024-05-15 12:23:56.917814] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.480 Malloc2 00:22:28.480 Malloc3 00:22:28.739 Malloc4 00:22:28.739 Malloc5 00:22:28.739 Malloc6 00:22:28.739 Malloc7 00:22:28.739 Malloc8 00:22:28.739 Malloc9 00:22:28.998 Malloc10 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.998 12:23:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2195903 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2195903 /var/tmp/bdevperf.sock 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 2195903 ']' 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.998 { 00:22:28.998 "params": { 00:22:28.998 "name": "Nvme$subsystem", 00:22:28.998 "trtype": "$TEST_TRANSPORT", 00:22:28.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.998 "adrfam": "ipv4", 00:22:28.998 "trsvcid": "$NVMF_PORT", 00:22:28.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.998 "hdgst": ${hdgst:-false}, 00:22:28.998 "ddgst": ${ddgst:-false} 00:22:28.998 }, 00:22:28.998 "method": "bdev_nvme_attach_controller" 00:22:28.998 } 00:22:28.998 EOF 00:22:28.998 )") 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.998 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme$subsystem", 00:22:28.999 "trtype": "$TEST_TRANSPORT", 00:22:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "$NVMF_PORT", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.999 "hdgst": ${hdgst:-false}, 00:22:28.999 "ddgst": ${ddgst:-false} 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 } 00:22:28.999 EOF 00:22:28.999 )") 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme$subsystem", 00:22:28.999 "trtype": "$TEST_TRANSPORT", 00:22:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "$NVMF_PORT", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.999 "hdgst": ${hdgst:-false}, 00:22:28.999 "ddgst": ${ddgst:-false} 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 } 00:22:28.999 EOF 00:22:28.999 )") 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme$subsystem", 00:22:28.999 "trtype": "$TEST_TRANSPORT", 00:22:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "$NVMF_PORT", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.999 "hdgst": ${hdgst:-false}, 00:22:28.999 "ddgst": ${ddgst:-false} 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 } 00:22:28.999 EOF 00:22:28.999 )") 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme$subsystem", 00:22:28.999 "trtype": "$TEST_TRANSPORT", 00:22:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "$NVMF_PORT", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.999 "hdgst": ${hdgst:-false}, 00:22:28.999 "ddgst": ${ddgst:-false} 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 } 00:22:28.999 EOF 00:22:28.999 )") 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme$subsystem", 00:22:28.999 "trtype": "$TEST_TRANSPORT", 00:22:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "$NVMF_PORT", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.999 "hdgst": ${hdgst:-false}, 00:22:28.999 "ddgst": ${ddgst:-false} 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 } 00:22:28.999 EOF 00:22:28.999 )") 00:22:28.999 [2024-05-15 12:23:57.402625] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:22:28.999 [2024-05-15 12:23:57.402677] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme$subsystem", 00:22:28.999 "trtype": "$TEST_TRANSPORT", 00:22:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "$NVMF_PORT", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.999 "hdgst": ${hdgst:-false}, 00:22:28.999 "ddgst": ${ddgst:-false} 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 } 00:22:28.999 EOF 00:22:28.999 )") 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme$subsystem", 00:22:28.999 "trtype": "$TEST_TRANSPORT", 00:22:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "$NVMF_PORT", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.999 "hdgst": ${hdgst:-false}, 00:22:28.999 "ddgst": ${ddgst:-false} 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 } 00:22:28.999 EOF 00:22:28.999 )") 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme$subsystem", 00:22:28.999 "trtype": "$TEST_TRANSPORT", 00:22:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "$NVMF_PORT", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.999 "hdgst": ${hdgst:-false}, 00:22:28.999 "ddgst": ${ddgst:-false} 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 } 00:22:28.999 EOF 00:22:28.999 )") 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.999 { 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme$subsystem", 00:22:28.999 "trtype": "$TEST_TRANSPORT", 00:22:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "$NVMF_PORT", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.999 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:28.999 "hdgst": ${hdgst:-false}, 00:22:28.999 "ddgst": ${ddgst:-false} 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 } 00:22:28.999 EOF 00:22:28.999 )") 00:22:28.999 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:28.999 12:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme1", 00:22:28.999 "trtype": "tcp", 00:22:28.999 "traddr": "10.0.0.2", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "4420", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.999 "hdgst": false, 00:22:28.999 "ddgst": false 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 },{ 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme2", 00:22:28.999 "trtype": "tcp", 00:22:28.999 "traddr": "10.0.0.2", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "4420", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:28.999 "hdgst": false, 00:22:28.999 "ddgst": false 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 },{ 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme3", 00:22:28.999 "trtype": "tcp", 00:22:28.999 "traddr": "10.0.0.2", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "4420", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:28.999 "hdgst": false, 00:22:28.999 "ddgst": false 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 },{ 00:22:28.999 "params": { 00:22:28.999 "name": "Nvme4", 00:22:28.999 "trtype": "tcp", 00:22:28.999 "traddr": "10.0.0.2", 00:22:28.999 "adrfam": "ipv4", 00:22:28.999 "trsvcid": "4420", 00:22:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:28.999 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:28.999 "hdgst": false, 00:22:28.999 "ddgst": false 00:22:28.999 }, 00:22:28.999 "method": "bdev_nvme_attach_controller" 00:22:28.999 },{ 00:22:28.999 "params": { 00:22:29.000 "name": "Nvme5", 00:22:29.000 "trtype": "tcp", 00:22:29.000 "traddr": "10.0.0.2", 00:22:29.000 "adrfam": "ipv4", 00:22:29.000 "trsvcid": "4420", 00:22:29.000 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:29.000 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:29.000 "hdgst": false, 00:22:29.000 "ddgst": false 00:22:29.000 }, 00:22:29.000 "method": "bdev_nvme_attach_controller" 00:22:29.000 },{ 00:22:29.000 "params": { 00:22:29.000 "name": "Nvme6", 00:22:29.000 "trtype": "tcp", 00:22:29.000 "traddr": "10.0.0.2", 00:22:29.000 "adrfam": "ipv4", 00:22:29.000 "trsvcid": "4420", 00:22:29.000 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:29.000 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:29.000 "hdgst": false, 00:22:29.000 "ddgst": false 00:22:29.000 }, 00:22:29.000 "method": "bdev_nvme_attach_controller" 00:22:29.000 },{ 00:22:29.000 "params": { 00:22:29.000 "name": "Nvme7", 00:22:29.000 "trtype": "tcp", 00:22:29.000 "traddr": "10.0.0.2", 00:22:29.000 "adrfam": "ipv4", 00:22:29.000 "trsvcid": "4420", 00:22:29.000 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:29.000 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:29.000 "hdgst": false, 00:22:29.000 "ddgst": false 00:22:29.000 }, 00:22:29.000 "method": "bdev_nvme_attach_controller" 00:22:29.000 },{ 00:22:29.000 "params": { 00:22:29.000 "name": "Nvme8", 00:22:29.000 "trtype": "tcp", 00:22:29.000 "traddr": "10.0.0.2", 00:22:29.000 "adrfam": "ipv4", 00:22:29.000 "trsvcid": "4420", 00:22:29.000 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:29.000 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:29.000 "hdgst": false, 00:22:29.000 "ddgst": false 00:22:29.000 }, 00:22:29.000 "method": "bdev_nvme_attach_controller" 00:22:29.000 },{ 00:22:29.000 "params": { 00:22:29.000 "name": "Nvme9", 00:22:29.000 "trtype": "tcp", 00:22:29.000 "traddr": "10.0.0.2", 00:22:29.000 "adrfam": "ipv4", 00:22:29.000 "trsvcid": "4420", 00:22:29.000 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:29.000 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:29.000 "hdgst": false, 00:22:29.000 "ddgst": false 00:22:29.000 }, 00:22:29.000 "method": "bdev_nvme_attach_controller" 00:22:29.000 },{ 00:22:29.000 "params": { 00:22:29.000 "name": "Nvme10", 00:22:29.000 "trtype": "tcp", 00:22:29.000 "traddr": "10.0.0.2", 00:22:29.000 "adrfam": "ipv4", 00:22:29.000 "trsvcid": "4420", 00:22:29.000 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:29.000 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:29.000 "hdgst": false, 00:22:29.000 "ddgst": false 00:22:29.000 }, 00:22:29.000 "method": "bdev_nvme_attach_controller" 00:22:29.000 }' 00:22:29.000 [2024-05-15 12:23:57.474655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.258 [2024-05-15 12:23:57.544723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.692 12:23:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:30.692 12:23:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:22:30.692 12:23:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:30.692 12:23:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.692 12:23:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.692 12:23:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.692 12:23:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2195903 00:22:30.692 12:23:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:30.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2195903 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:30.692 12:23:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:31.628 12:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2195600 00:22:31.628 12:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:31.628 12:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:31.628 12:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:31.628 12:24:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:31.628 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.628 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.628 { 00:22:31.628 "params": { 00:22:31.628 "name": "Nvme$subsystem", 00:22:31.628 "trtype": "$TEST_TRANSPORT", 00:22:31.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.628 "adrfam": "ipv4", 00:22:31.628 "trsvcid": "$NVMF_PORT", 00:22:31.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.628 "hdgst": ${hdgst:-false}, 00:22:31.628 "ddgst": ${ddgst:-false} 00:22:31.628 }, 00:22:31.628 "method": "bdev_nvme_attach_controller" 00:22:31.628 } 00:22:31.628 EOF 00:22:31.628 )") 00:22:31.628 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.628 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.628 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.628 { 00:22:31.628 "params": { 00:22:31.628 "name": "Nvme$subsystem", 00:22:31.628 "trtype": "$TEST_TRANSPORT", 00:22:31.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.628 "adrfam": "ipv4", 00:22:31.628 "trsvcid": "$NVMF_PORT", 00:22:31.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.628 "hdgst": ${hdgst:-false}, 00:22:31.628 "ddgst": ${ddgst:-false} 00:22:31.628 }, 00:22:31.628 "method": "bdev_nvme_attach_controller" 00:22:31.628 } 00:22:31.628 EOF 00:22:31.628 )") 00:22:31.628 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.628 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.628 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.628 { 00:22:31.628 "params": { 00:22:31.628 "name": "Nvme$subsystem", 00:22:31.628 "trtype": "$TEST_TRANSPORT", 00:22:31.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.628 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "$NVMF_PORT", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.629 "hdgst": ${hdgst:-false}, 00:22:31.629 "ddgst": ${ddgst:-false} 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 } 00:22:31.629 EOF 00:22:31.629 )") 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.629 { 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme$subsystem", 00:22:31.629 "trtype": "$TEST_TRANSPORT", 00:22:31.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "$NVMF_PORT", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.629 "hdgst": ${hdgst:-false}, 00:22:31.629 "ddgst": ${ddgst:-false} 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 } 00:22:31.629 EOF 00:22:31.629 )") 00:22:31.629 12:24:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.629 { 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme$subsystem", 00:22:31.629 "trtype": "$TEST_TRANSPORT", 00:22:31.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "$NVMF_PORT", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.629 "hdgst": ${hdgst:-false}, 00:22:31.629 "ddgst": ${ddgst:-false} 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 } 00:22:31.629 EOF 00:22:31.629 )") 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.629 { 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme$subsystem", 00:22:31.629 "trtype": "$TEST_TRANSPORT", 00:22:31.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "$NVMF_PORT", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.629 "hdgst": ${hdgst:-false}, 00:22:31.629 "ddgst": ${ddgst:-false} 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 } 00:22:31.629 EOF 00:22:31.629 )") 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.629 [2024-05-15 12:24:00.047000] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:22:31.629 [2024-05-15 12:24:00.047055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196454 ] 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.629 { 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme$subsystem", 00:22:31.629 "trtype": "$TEST_TRANSPORT", 00:22:31.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "$NVMF_PORT", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.629 "hdgst": ${hdgst:-false}, 00:22:31.629 "ddgst": ${ddgst:-false} 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 } 00:22:31.629 EOF 00:22:31.629 )") 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.629 { 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme$subsystem", 00:22:31.629 "trtype": "$TEST_TRANSPORT", 00:22:31.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "$NVMF_PORT", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.629 "hdgst": ${hdgst:-false}, 00:22:31.629 "ddgst": ${ddgst:-false} 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 } 00:22:31.629 EOF 00:22:31.629 )") 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.629 { 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme$subsystem", 00:22:31.629 "trtype": "$TEST_TRANSPORT", 00:22:31.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "$NVMF_PORT", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.629 "hdgst": ${hdgst:-false}, 00:22:31.629 "ddgst": ${ddgst:-false} 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 } 00:22:31.629 EOF 00:22:31.629 )") 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.629 { 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme$subsystem", 00:22:31.629 "trtype": "$TEST_TRANSPORT", 00:22:31.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "$NVMF_PORT", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.629 "hdgst": ${hdgst:-false}, 
00:22:31.629 "ddgst": ${ddgst:-false} 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 } 00:22:31.629 EOF 00:22:31.629 )") 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:31.629 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:31.629 12:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme1", 00:22:31.629 "trtype": "tcp", 00:22:31.629 "traddr": "10.0.0.2", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "4420", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.629 "hdgst": false, 00:22:31.629 "ddgst": false 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 },{ 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme2", 00:22:31.629 "trtype": "tcp", 00:22:31.629 "traddr": "10.0.0.2", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "4420", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:31.629 "hdgst": false, 00:22:31.629 "ddgst": false 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 },{ 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme3", 00:22:31.629 "trtype": "tcp", 00:22:31.629 "traddr": "10.0.0.2", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "4420", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:31.629 "hdgst": false, 00:22:31.629 "ddgst": false 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 },{ 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme4", 00:22:31.629 "trtype": "tcp", 00:22:31.629 "traddr": "10.0.0.2", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "4420", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:31.629 "hdgst": false, 00:22:31.629 "ddgst": false 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 },{ 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme5", 00:22:31.629 "trtype": "tcp", 00:22:31.629 "traddr": "10.0.0.2", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "4420", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:31.629 "hdgst": false, 00:22:31.629 "ddgst": false 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 },{ 00:22:31.629 "params": { 00:22:31.629 "name": "Nvme6", 00:22:31.629 "trtype": "tcp", 00:22:31.629 "traddr": "10.0.0.2", 00:22:31.629 "adrfam": "ipv4", 00:22:31.629 "trsvcid": "4420", 00:22:31.629 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:31.629 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:31.629 "hdgst": false, 00:22:31.629 "ddgst": false 00:22:31.629 }, 00:22:31.629 "method": "bdev_nvme_attach_controller" 00:22:31.629 },{ 00:22:31.629 "params": { 00:22:31.630 "name": "Nvme7", 00:22:31.630 "trtype": "tcp", 00:22:31.630 "traddr": "10.0.0.2", 00:22:31.630 "adrfam": "ipv4", 00:22:31.630 "trsvcid": "4420", 00:22:31.630 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:31.630 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:31.630 "hdgst": false, 00:22:31.630 "ddgst": false 
00:22:31.630 }, 00:22:31.630 "method": "bdev_nvme_attach_controller" 00:22:31.630 },{ 00:22:31.630 "params": { 00:22:31.630 "name": "Nvme8", 00:22:31.630 "trtype": "tcp", 00:22:31.630 "traddr": "10.0.0.2", 00:22:31.630 "adrfam": "ipv4", 00:22:31.630 "trsvcid": "4420", 00:22:31.630 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:31.630 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:31.630 "hdgst": false, 00:22:31.630 "ddgst": false 00:22:31.630 }, 00:22:31.630 "method": "bdev_nvme_attach_controller" 00:22:31.630 },{ 00:22:31.630 "params": { 00:22:31.630 "name": "Nvme9", 00:22:31.630 "trtype": "tcp", 00:22:31.630 "traddr": "10.0.0.2", 00:22:31.630 "adrfam": "ipv4", 00:22:31.630 "trsvcid": "4420", 00:22:31.630 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:31.630 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:31.630 "hdgst": false, 00:22:31.630 "ddgst": false 00:22:31.630 }, 00:22:31.630 "method": "bdev_nvme_attach_controller" 00:22:31.630 },{ 00:22:31.630 "params": { 00:22:31.630 "name": "Nvme10", 00:22:31.630 "trtype": "tcp", 00:22:31.630 "traddr": "10.0.0.2", 00:22:31.630 "adrfam": "ipv4", 00:22:31.630 "trsvcid": "4420", 00:22:31.630 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:31.630 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:31.630 "hdgst": false, 00:22:31.630 "ddgst": false 00:22:31.630 }, 00:22:31.630 "method": "bdev_nvme_attach_controller" 00:22:31.630 }' 00:22:31.630 [2024-05-15 12:24:00.120114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.888 [2024-05-15 12:24:00.191369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.824 Running I/O for 1 seconds... 00:22:34.200 00:22:34.200 Latency(us) 00:22:34.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.200 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.200 Verification LBA range: start 0x0 length 0x400 00:22:34.200 Nvme1n1 : 1.14 278.52 17.41 0.00 0.00 227165.75 20656.95 203004.31 00:22:34.200 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.200 Verification LBA range: start 0x0 length 0x400 00:22:34.200 Nvme2n1 : 1.14 223.64 13.98 0.00 0.00 280165.38 19188.94 255013.68 00:22:34.200 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.200 Verification LBA range: start 0x0 length 0x400 00:22:34.200 Nvme3n1 : 1.07 298.87 18.68 0.00 0.00 206379.09 17511.22 202165.45 00:22:34.200 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.200 Verification LBA range: start 0x0 length 0x400 00:22:34.200 Nvme4n1 : 1.05 243.70 15.23 0.00 0.00 249219.28 19188.94 213909.50 00:22:34.200 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.200 Verification LBA range: start 0x0 length 0x400 00:22:34.200 Nvme5n1 : 1.15 277.35 17.33 0.00 0.00 216793.09 18979.23 221459.25 00:22:34.200 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.200 Verification LBA range: start 0x0 length 0x400 00:22:34.200 Nvme6n1 : 1.15 277.59 17.35 0.00 0.00 213168.78 18664.65 229847.86 00:22:34.200 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.200 Verification LBA range: start 0x0 length 0x400 00:22:34.200 Nvme7n1 : 1.14 281.90 17.62 0.00 0.00 207220.57 20447.23 213070.64 00:22:34.200 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.200 Verification LBA range: start 0x0 length 0x400 00:22:34.200 Nvme8n1 : 1.13 283.13 17.70 0.00 0.00 203211.57 
18874.37 248302.80 00:22:34.200 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.200 Verification LBA range: start 0x0 length 0x400 00:22:34.200 Nvme9n1 : 1.16 330.42 20.65 0.00 0.00 172237.35 14784.92 194615.71 00:22:34.200 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.200 Verification LBA range: start 0x0 length 0x400 00:22:34.200 Nvme10n1 : 1.18 326.73 20.42 0.00 0.00 172041.63 11377.05 201326.59 00:22:34.200 =================================================================================================================== 00:22:34.200 Total : 2821.87 176.37 0.00 0.00 211048.06 11377.05 255013.68 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:34.459 rmmod nvme_tcp 00:22:34.459 rmmod nvme_fabrics 00:22:34.459 rmmod nvme_keyring 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2195600 ']' 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2195600 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # '[' -z 2195600 ']' 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # kill -0 2195600 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # uname 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2195600 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@965 -- # echo 'killing process with pid 2195600' 00:22:34.459 killing process with pid 2195600 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # kill 2195600 00:22:34.459 [2024-05-15 12:24:02.891684] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:34.459 12:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # wait 2195600 00:22:35.036 12:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.036 12:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.036 12:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.036 12:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.036 12:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.036 12:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.036 12:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.036 12:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:36.940 00:22:36.940 real 0m16.511s 00:22:36.940 user 0m34.042s 00:22:36.940 sys 0m7.084s 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:36.940 ************************************ 00:22:36.940 END TEST nvmf_shutdown_tc1 00:22:36.940 ************************************ 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:36.940 ************************************ 00:22:36.940 START TEST nvmf_shutdown_tc2 00:22:36.940 ************************************ 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc2 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.940 
12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.940 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:37.200 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:37.200 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:37.200 Found net devices under 0000:af:00.0: cvl_0_0 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:37.200 Found net devices under 0000:af:00.1: cvl_0_1 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
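Condensed for readability, the nvmf_tcp_init steps traced around this point amount to the short sequence below. This is a sketch only: the port names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk, and the 10.0.0.x/24 addresses are this run's values as shown in the trace, not fixed parts of the harness.

    # clear any stale addressing on both E810 ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # isolate one port in a private namespace for the target; keep the other
    # port in the root namespace for the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic (port 4420) into the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The ping output that follows in the trace (sub-millisecond round trips on both addresses) is the success criterion for this setup step.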
00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:37.200 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:37.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:22:37.459 00:22:37.459 --- 10.0.0.2 ping statistics --- 00:22:37.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.459 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:37.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:22:37.459 00:22:37.459 --- 10.0.0.1 ping statistics --- 00:22:37.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.459 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2198086 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2198086 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2198086 ']' 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:37.459 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.460 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:37.460 12:24:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.460 [2024-05-15 12:24:05.887096] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:22:37.460 [2024-05-15 12:24:05.887143] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.460 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.460 [2024-05-15 12:24:05.962102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.718 [2024-05-15 12:24:06.037352] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.718 [2024-05-15 12:24:06.037389] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.718 [2024-05-15 12:24:06.037399] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.718 [2024-05-15 12:24:06.037408] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.718 [2024-05-15 12:24:06.037415] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.718 [2024-05-15 12:24:06.037529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.718 [2024-05-15 12:24:06.037619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.718 [2024-05-15 12:24:06.037727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.718 [2024-05-15 12:24:06.037728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.286 [2024-05-15 12:24:06.734895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.286 12:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.544 Malloc1 00:22:38.544 [2024-05-15 12:24:06.841310] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:38.545 [2024-05-15 12:24:06.841554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.545 Malloc2 00:22:38.545 Malloc3 00:22:38.545 Malloc4 00:22:38.545 Malloc5 00:22:38.545 Malloc6 00:22:38.545 Malloc7 00:22:38.804 Malloc8 00:22:38.804 Malloc9 00:22:38.804 Malloc10 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.804 12:24:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2198388 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2198388 /var/tmp/bdevperf.sock 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2198388 ']' 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.804 { 00:22:38.804 "params": { 00:22:38.804 "name": "Nvme$subsystem", 00:22:38.804 "trtype": "$TEST_TRANSPORT", 00:22:38.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.804 "adrfam": "ipv4", 00:22:38.804 "trsvcid": "$NVMF_PORT", 00:22:38.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.804 "hdgst": ${hdgst:-false}, 00:22:38.804 "ddgst": ${ddgst:-false} 00:22:38.804 }, 00:22:38.804 "method": "bdev_nvme_attach_controller" 00:22:38.804 } 00:22:38.804 EOF 00:22:38.804 )") 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.804 { 00:22:38.804 "params": { 00:22:38.804 "name": "Nvme$subsystem", 00:22:38.804 "trtype": "$TEST_TRANSPORT", 00:22:38.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.804 "adrfam": "ipv4", 00:22:38.804 "trsvcid": "$NVMF_PORT", 00:22:38.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.804 "hdgst": ${hdgst:-false}, 00:22:38.804 "ddgst": ${ddgst:-false} 00:22:38.804 }, 00:22:38.804 "method": "bdev_nvme_attach_controller" 00:22:38.804 } 00:22:38.804 EOF 00:22:38.804 )") 00:22:38.804 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.805 { 00:22:38.805 "params": { 00:22:38.805 "name": "Nvme$subsystem", 00:22:38.805 "trtype": "$TEST_TRANSPORT", 00:22:38.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.805 "adrfam": "ipv4", 00:22:38.805 "trsvcid": "$NVMF_PORT", 00:22:38.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.805 "hdgst": ${hdgst:-false}, 00:22:38.805 "ddgst": ${ddgst:-false} 00:22:38.805 }, 00:22:38.805 "method": "bdev_nvme_attach_controller" 00:22:38.805 } 00:22:38.805 EOF 00:22:38.805 )") 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.805 { 00:22:38.805 "params": { 00:22:38.805 "name": "Nvme$subsystem", 00:22:38.805 "trtype": "$TEST_TRANSPORT", 00:22:38.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.805 "adrfam": "ipv4", 00:22:38.805 "trsvcid": "$NVMF_PORT", 00:22:38.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.805 "hdgst": ${hdgst:-false}, 00:22:38.805 "ddgst": ${ddgst:-false} 00:22:38.805 }, 00:22:38.805 "method": "bdev_nvme_attach_controller" 00:22:38.805 } 00:22:38.805 EOF 00:22:38.805 )") 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.805 { 00:22:38.805 "params": { 00:22:38.805 "name": "Nvme$subsystem", 00:22:38.805 "trtype": "$TEST_TRANSPORT", 00:22:38.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.805 "adrfam": "ipv4", 00:22:38.805 "trsvcid": "$NVMF_PORT", 00:22:38.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.805 "hdgst": ${hdgst:-false}, 00:22:38.805 "ddgst": ${ddgst:-false} 00:22:38.805 }, 00:22:38.805 "method": "bdev_nvme_attach_controller" 00:22:38.805 } 00:22:38.805 EOF 00:22:38.805 )") 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.805 { 00:22:38.805 "params": { 00:22:38.805 "name": "Nvme$subsystem", 00:22:38.805 "trtype": "$TEST_TRANSPORT", 00:22:38.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.805 "adrfam": "ipv4", 00:22:38.805 "trsvcid": "$NVMF_PORT", 00:22:38.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.805 "hdgst": ${hdgst:-false}, 00:22:38.805 "ddgst": ${ddgst:-false} 00:22:38.805 }, 00:22:38.805 "method": "bdev_nvme_attach_controller" 00:22:38.805 } 00:22:38.805 EOF 00:22:38.805 )") 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:38.805 [2024-05-15 12:24:07.322632] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 
23.11.0 initialization... 00:22:38.805 [2024-05-15 12:24:07.322687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198388 ] 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.805 { 00:22:38.805 "params": { 00:22:38.805 "name": "Nvme$subsystem", 00:22:38.805 "trtype": "$TEST_TRANSPORT", 00:22:38.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.805 "adrfam": "ipv4", 00:22:38.805 "trsvcid": "$NVMF_PORT", 00:22:38.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.805 "hdgst": ${hdgst:-false}, 00:22:38.805 "ddgst": ${ddgst:-false} 00:22:38.805 }, 00:22:38.805 "method": "bdev_nvme_attach_controller" 00:22:38.805 } 00:22:38.805 EOF 00:22:38.805 )") 00:22:38.805 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:39.064 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.064 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.064 { 00:22:39.064 "params": { 00:22:39.064 "name": "Nvme$subsystem", 00:22:39.064 "trtype": "$TEST_TRANSPORT", 00:22:39.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.064 "adrfam": "ipv4", 00:22:39.064 "trsvcid": "$NVMF_PORT", 00:22:39.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.064 "hdgst": ${hdgst:-false}, 00:22:39.064 "ddgst": ${ddgst:-false} 00:22:39.064 }, 00:22:39.064 "method": "bdev_nvme_attach_controller" 00:22:39.064 } 00:22:39.064 EOF 00:22:39.064 )") 00:22:39.064 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:39.064 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.064 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.064 { 00:22:39.064 "params": { 00:22:39.064 "name": "Nvme$subsystem", 00:22:39.064 "trtype": "$TEST_TRANSPORT", 00:22:39.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.064 "adrfam": "ipv4", 00:22:39.064 "trsvcid": "$NVMF_PORT", 00:22:39.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.064 "hdgst": ${hdgst:-false}, 00:22:39.064 "ddgst": ${ddgst:-false} 00:22:39.064 }, 00:22:39.064 "method": "bdev_nvme_attach_controller" 00:22:39.064 } 00:22:39.064 EOF 00:22:39.064 )") 00:22:39.064 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:39.064 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.064 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.064 { 00:22:39.064 "params": { 00:22:39.064 "name": "Nvme$subsystem", 00:22:39.064 "trtype": "$TEST_TRANSPORT", 00:22:39.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.064 "adrfam": "ipv4", 00:22:39.064 "trsvcid": "$NVMF_PORT", 00:22:39.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.064 
"hdgst": ${hdgst:-false}, 00:22:39.064 "ddgst": ${ddgst:-false} 00:22:39.064 }, 00:22:39.064 "method": "bdev_nvme_attach_controller" 00:22:39.064 } 00:22:39.064 EOF 00:22:39.064 )") 00:22:39.064 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:39.064 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.064 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:22:39.064 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:39.064 12:24:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:39.064 "params": { 00:22:39.064 "name": "Nvme1", 00:22:39.064 "trtype": "tcp", 00:22:39.064 "traddr": "10.0.0.2", 00:22:39.064 "adrfam": "ipv4", 00:22:39.064 "trsvcid": "4420", 00:22:39.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.064 "hdgst": false, 00:22:39.064 "ddgst": false 00:22:39.064 }, 00:22:39.064 "method": "bdev_nvme_attach_controller" 00:22:39.064 },{ 00:22:39.064 "params": { 00:22:39.064 "name": "Nvme2", 00:22:39.064 "trtype": "tcp", 00:22:39.064 "traddr": "10.0.0.2", 00:22:39.064 "adrfam": "ipv4", 00:22:39.064 "trsvcid": "4420", 00:22:39.064 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:39.064 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:39.064 "hdgst": false, 00:22:39.064 "ddgst": false 00:22:39.064 }, 00:22:39.064 "method": "bdev_nvme_attach_controller" 00:22:39.064 },{ 00:22:39.064 "params": { 00:22:39.064 "name": "Nvme3", 00:22:39.064 "trtype": "tcp", 00:22:39.064 "traddr": "10.0.0.2", 00:22:39.064 "adrfam": "ipv4", 00:22:39.064 "trsvcid": "4420", 00:22:39.064 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:39.064 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:39.064 "hdgst": false, 00:22:39.064 "ddgst": false 00:22:39.064 }, 00:22:39.064 "method": "bdev_nvme_attach_controller" 00:22:39.064 },{ 00:22:39.064 "params": { 00:22:39.064 "name": "Nvme4", 00:22:39.064 "trtype": "tcp", 00:22:39.064 "traddr": "10.0.0.2", 00:22:39.064 "adrfam": "ipv4", 00:22:39.064 "trsvcid": "4420", 00:22:39.064 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:39.064 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:39.064 "hdgst": false, 00:22:39.064 "ddgst": false 00:22:39.064 }, 00:22:39.064 "method": "bdev_nvme_attach_controller" 00:22:39.064 },{ 00:22:39.064 "params": { 00:22:39.064 "name": "Nvme5", 00:22:39.064 "trtype": "tcp", 00:22:39.064 "traddr": "10.0.0.2", 00:22:39.064 "adrfam": "ipv4", 00:22:39.064 "trsvcid": "4420", 00:22:39.064 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:39.064 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:39.064 "hdgst": false, 00:22:39.064 "ddgst": false 00:22:39.064 }, 00:22:39.064 "method": "bdev_nvme_attach_controller" 00:22:39.064 },{ 00:22:39.064 "params": { 00:22:39.065 "name": "Nvme6", 00:22:39.065 "trtype": "tcp", 00:22:39.065 "traddr": "10.0.0.2", 00:22:39.065 "adrfam": "ipv4", 00:22:39.065 "trsvcid": "4420", 00:22:39.065 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:39.065 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:39.065 "hdgst": false, 00:22:39.065 "ddgst": false 00:22:39.065 }, 00:22:39.065 "method": "bdev_nvme_attach_controller" 00:22:39.065 },{ 00:22:39.065 "params": { 00:22:39.065 "name": "Nvme7", 00:22:39.065 "trtype": "tcp", 00:22:39.065 "traddr": "10.0.0.2", 00:22:39.065 "adrfam": "ipv4", 00:22:39.065 "trsvcid": "4420", 00:22:39.065 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:39.065 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:39.065 "hdgst": false, 
00:22:39.065 "ddgst": false 00:22:39.065 }, 00:22:39.065 "method": "bdev_nvme_attach_controller" 00:22:39.065 },{ 00:22:39.065 "params": { 00:22:39.065 "name": "Nvme8", 00:22:39.065 "trtype": "tcp", 00:22:39.065 "traddr": "10.0.0.2", 00:22:39.065 "adrfam": "ipv4", 00:22:39.065 "trsvcid": "4420", 00:22:39.065 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:39.065 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:39.065 "hdgst": false, 00:22:39.065 "ddgst": false 00:22:39.065 }, 00:22:39.065 "method": "bdev_nvme_attach_controller" 00:22:39.065 },{ 00:22:39.065 "params": { 00:22:39.065 "name": "Nvme9", 00:22:39.065 "trtype": "tcp", 00:22:39.065 "traddr": "10.0.0.2", 00:22:39.065 "adrfam": "ipv4", 00:22:39.065 "trsvcid": "4420", 00:22:39.065 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:39.065 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:39.065 "hdgst": false, 00:22:39.065 "ddgst": false 00:22:39.065 }, 00:22:39.065 "method": "bdev_nvme_attach_controller" 00:22:39.065 },{ 00:22:39.065 "params": { 00:22:39.065 "name": "Nvme10", 00:22:39.065 "trtype": "tcp", 00:22:39.065 "traddr": "10.0.0.2", 00:22:39.065 "adrfam": "ipv4", 00:22:39.065 "trsvcid": "4420", 00:22:39.065 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:39.065 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:39.065 "hdgst": false, 00:22:39.065 "ddgst": false 00:22:39.065 }, 00:22:39.065 "method": "bdev_nvme_attach_controller" 00:22:39.065 }' 00:22:39.065 [2024-05-15 12:24:07.395080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.065 [2024-05-15 12:24:07.466352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.967 Running I/O for 10 seconds... 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 
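The waitforio trace that follows is easier to read as the loop it implements. The sketch below paraphrases target/shutdown.sh's waitforio as seen in this trace; rpc_cmd is assumed to be the harness wrapper around SPDK's scripts/rpc.py, pointed at the bdevperf RPC socket (-s /var/tmp/bdevperf.sock) exactly as the surrounding lines show.

    waitforio() {   # sketch: poll until the named bdev has completed >= 100 reads
        local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                            | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0      # bdevperf is demonstrably doing I/O; stop polling
                break
            fi
            sleep 0.25     # up to 10 samples, 0.25 s apart
        done
        return $ret
    }

In this run the threshold is reached on the third sample (read_io_count goes 3, 67, then 195), at which point the test proceeds to kill the target while I/O is in flight.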
00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:40.967 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:41.226 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:41.226 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:41.226 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:41.226 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:41.226 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.226 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.226 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.226 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:41.226 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:41.226 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:41.485 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:41.485 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:41.485 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:41.485 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:41.485 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.485 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2198388 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 2198388 ']' 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 2198388 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 00:22:41.486 12:24:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2198388 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2198388' 00:22:41.486 killing process with pid 2198388 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 2198388 00:22:41.486 12:24:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 2198388 00:22:41.745 Received shutdown signal, test time was about 0.935937 seconds 00:22:41.745 00:22:41.745 Latency(us) 00:22:41.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.745 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.745 Verification LBA range: start 0x0 length 0x400 00:22:41.745 Nvme1n1 : 0.90 285.09 17.82 0.00 0.00 222092.70 19084.08 193776.84 00:22:41.745 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.745 Verification LBA range: start 0x0 length 0x400 00:22:41.745 Nvme2n1 : 0.90 283.18 17.70 0.00 0.00 219746.30 19084.08 212231.78 00:22:41.745 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.745 Verification LBA range: start 0x0 length 0x400 00:22:41.745 Nvme3n1 : 0.91 280.94 17.56 0.00 0.00 218054.66 20342.37 198810.01 00:22:41.745 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.745 Verification LBA range: start 0x0 length 0x400 00:22:41.745 Nvme4n1 : 0.89 288.48 18.03 0.00 0.00 208024.37 18454.94 210554.06 00:22:41.745 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.745 Verification LBA range: start 0x0 length 0x400 00:22:41.745 Nvme5n1 : 0.94 273.71 17.11 0.00 0.00 216558.80 16986.93 224814.69 00:22:41.745 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.745 Verification LBA range: start 0x0 length 0x400 00:22:41.745 Nvme6n1 : 0.92 277.40 17.34 0.00 0.00 209734.25 19503.51 216426.09 00:22:41.745 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.745 Verification LBA range: start 0x0 length 0x400 00:22:41.745 Nvme7n1 : 0.89 293.11 18.32 0.00 0.00 193336.67 5636.10 208037.48 00:22:41.745 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.745 Verification LBA range: start 0x0 length 0x400 00:22:41.745 Nvme8n1 : 0.91 281.90 17.62 0.00 0.00 198468.61 25585.25 209715.20 00:22:41.745 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.745 Verification LBA range: start 0x0 length 0x400 00:22:41.745 Nvme9n1 : 0.93 274.84 17.18 0.00 0.00 200697.24 20656.95 211392.92 00:22:41.745 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.745 Verification LBA range: start 0x0 length 0x400 00:22:41.745 Nvme10n1 : 0.93 274.14 17.13 0.00 0.00 197604.15 15623.78 226492.42 00:22:41.745 =================================================================================================================== 00:22:41.745 Total : 2812.81 175.80 0.00 0.00 208402.35 
5636.10 226492.42 00:22:41.745 12:24:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2198086 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:43.125 rmmod nvme_tcp 00:22:43.125 rmmod nvme_fabrics 00:22:43.125 rmmod nvme_keyring 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2198086 ']' 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2198086 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 2198086 ']' 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 2198086 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2198086 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2198086' 00:22:43.125 killing process with pid 2198086 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 2198086 00:22:43.125 [2024-05-15 12:24:11.363288] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 
times 00:22:43.125 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 2198086 00:22:43.385 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:43.385 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:43.385 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:43.385 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.385 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:43.385 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.385 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.385 12:24:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.917 00:22:45.917 real 0m8.380s 00:22:45.917 user 0m25.370s 00:22:45.917 sys 0m1.752s 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.917 ************************************ 00:22:45.917 END TEST nvmf_shutdown_tc2 00:22:45.917 ************************************ 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:45.917 ************************************ 00:22:45.917 START TEST nvmf_shutdown_tc3 00:22:45.917 ************************************ 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc3 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy 
!= virt ]] 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:45.917 12:24:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:45.917 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.917 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:45.918 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
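
The xtrace above shows how nvmf/common.sh resolves each detected E810 PCI function to its kernel net device by globbing the device's sysfs node, before echoing the "Found net devices under ..." lines that follow. A minimal standalone sketch of that lookup, using the 0000:af:00.0 address this particular log reports (host-specific, not a fixed value):

# Map a PCI function to the net device(s) the kernel bound to it.
# The kernel exposes them as /sys/bus/pci/devices/<BDF>/net/<ifname>.
pci="0000:af:00.0"
for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue   # glob did not match: no netdev bound here
    echo "Found net devices under $pci: $(basename "$netdir")"
done
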
00:22:45.918 Found net devices under 0000:af:00.0: cvl_0_0 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:45.918 Found net devices under 0000:af:00.1: cvl_0_1 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.918 12:24:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:45.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:22:45.918 00:22:45.918 --- 10.0.0.2 ping statistics --- 00:22:45.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.918 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:22:45.918 00:22:45.918 --- 10.0.0.1 ping statistics --- 00:22:45.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.918 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2199688 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2199688 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 2199688 ']' 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:45.918 12:24:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.918 [2024-05-15 12:24:14.398385] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:22:45.918 [2024-05-15 12:24:14.398434] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.918 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.180 [2024-05-15 12:24:14.473708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:46.180 [2024-05-15 12:24:14.547508] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.180 [2024-05-15 12:24:14.547544] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.180 [2024-05-15 12:24:14.547554] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.180 [2024-05-15 12:24:14.547562] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.180 [2024-05-15 12:24:14.547569] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
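
By this point nvmftestinit has rebuilt the back-to-back TCP topology (the ping statistics above confirm connectivity in both directions) and launched nvmf_tgt inside the target-side network namespace. A condensed, hedged sketch of what the preceding trace amounts to; interface names, addresses, the namespace name and the 0x1E core mask are simply what this log shows and vary per host:

# The target NIC lives in its own netns so initiator->target traffic crosses
# the wire between the two E810 ports instead of staying on loopback.
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0
TARGET_IP=10.0.0.2
INITIATOR_IF=cvl_0_1
INITIATOR_IP=10.0.0.1

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add "$INITIATOR_IP"/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP"/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$TARGET_IP"                          # initiator -> target
ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"   # target -> initiator

# Start the target inside the namespace; the harness then waits for its RPC
# socket (/var/tmp/spdk.sock) before issuing any rpc_cmd calls.
ip netns exec "$NS" ./build/bin/nvmf_tgt -m 0x1E &
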
00:22:46.180 [2024-05-15 12:24:14.547671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.180 [2024-05-15 12:24:14.547755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.180 [2024-05-15 12:24:14.547865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.180 [2024-05-15 12:24:14.547866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.768 [2024-05-15 12:24:15.240943] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.768 12:24:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:46.768 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:47.026 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.026 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:47.026 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.026 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:47.026 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:47.026 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.026 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.026 Malloc1 00:22:47.026 [2024-05-15 12:24:15.351404] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:47.026 [2024-05-15 12:24:15.351646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.026 Malloc2 00:22:47.026 Malloc3 00:22:47.026 Malloc4 00:22:47.026 Malloc5 00:22:47.026 Malloc6 00:22:47.284 Malloc7 00:22:47.284 Malloc8 00:22:47.284 Malloc9 00:22:47.284 Malloc10 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2200006 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2200006 /var/tmp/bdevperf.sock 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 2200006 ']' 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
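
Here shutdown_tc3 launches a bdevperf initiator (-q 64 -o 65536 -w verify -t 10) against the ten subsystems just created and, a little further down in the log, polls it over /var/tmp/bdevperf.sock until Nvme1n1 has completed at least 100 reads before it starts killing the target. A rough sketch of that polling pattern, assuming SPDK's stock scripts/rpc.py client and the socket and bdev names shown in this log:

# Wait until verify I/O is actually flowing on the first attached bdev;
# simplified form of waitforio in target/shutdown.sh (10 attempts, no other
# timeout handling).
for attempt in $(seq 1 10); do
    reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock \
                bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then
        echo "I/O confirmed ($reads reads) - safe to start shutting the target down"
        break
    fi
    sleep 0.25
done
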
00:22:47.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.284 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.284 { 00:22:47.284 "params": { 00:22:47.284 "name": "Nvme$subsystem", 00:22:47.285 "trtype": "$TEST_TRANSPORT", 00:22:47.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.285 "adrfam": "ipv4", 00:22:47.285 "trsvcid": "$NVMF_PORT", 00:22:47.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.285 "hdgst": ${hdgst:-false}, 00:22:47.285 "ddgst": ${ddgst:-false} 00:22:47.285 }, 00:22:47.285 "method": "bdev_nvme_attach_controller" 00:22:47.285 } 00:22:47.285 EOF 00:22:47.285 )") 00:22:47.285 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:47.285 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.285 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.285 { 00:22:47.285 "params": { 00:22:47.285 "name": "Nvme$subsystem", 00:22:47.285 "trtype": "$TEST_TRANSPORT", 00:22:47.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.285 "adrfam": "ipv4", 00:22:47.285 "trsvcid": "$NVMF_PORT", 00:22:47.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.285 "hdgst": ${hdgst:-false}, 00:22:47.285 "ddgst": ${ddgst:-false} 00:22:47.285 }, 00:22:47.285 "method": "bdev_nvme_attach_controller" 00:22:47.285 } 00:22:47.285 EOF 00:22:47.285 )") 00:22:47.285 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:47.285 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.285 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.285 { 00:22:47.285 "params": { 00:22:47.285 "name": "Nvme$subsystem", 00:22:47.285 "trtype": "$TEST_TRANSPORT", 00:22:47.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.285 "adrfam": "ipv4", 00:22:47.285 "trsvcid": "$NVMF_PORT", 00:22:47.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.285 "hdgst": ${hdgst:-false}, 00:22:47.285 "ddgst": ${ddgst:-false} 00:22:47.285 }, 00:22:47.285 "method": "bdev_nvme_attach_controller" 00:22:47.285 } 00:22:47.285 EOF 00:22:47.285 )") 00:22:47.285 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:47.543 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.543 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:22:47.543 { 00:22:47.543 "params": { 00:22:47.543 "name": "Nvme$subsystem", 00:22:47.543 "trtype": "$TEST_TRANSPORT", 00:22:47.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "$NVMF_PORT", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.544 "hdgst": ${hdgst:-false}, 00:22:47.544 "ddgst": ${ddgst:-false} 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 } 00:22:47.544 EOF 00:22:47.544 )") 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.544 { 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme$subsystem", 00:22:47.544 "trtype": "$TEST_TRANSPORT", 00:22:47.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "$NVMF_PORT", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.544 "hdgst": ${hdgst:-false}, 00:22:47.544 "ddgst": ${ddgst:-false} 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 } 00:22:47.544 EOF 00:22:47.544 )") 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.544 { 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme$subsystem", 00:22:47.544 "trtype": "$TEST_TRANSPORT", 00:22:47.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "$NVMF_PORT", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.544 "hdgst": ${hdgst:-false}, 00:22:47.544 "ddgst": ${ddgst:-false} 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 } 00:22:47.544 EOF 00:22:47.544 )") 00:22:47.544 [2024-05-15 12:24:15.835204] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
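
The repeated heredocs in this stretch of the trace are gen_nvmf_target_json expanding the same bdev_nvme_attach_controller template once per subsystem; the fully resolved JSON is printed a little below and handed to bdevperf on /dev/fd/63 (i.e., via process substitution, as the --json argument above indicates). For a single subsystem the generated configuration is roughly equivalent to the standalone file sketched below; the outer "subsystems"/"bdev" wrapper is an assumption based on SPDK's standard JSON-config layout, since the trace only shows the inner entries, and the addresses and NQNs are the ones visible in this log:

# Hand-written stand-in for one entry of the generated bdevperf config.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# ./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10
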
00:22:47.544 [2024-05-15 12:24:15.835257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200006 ] 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.544 { 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme$subsystem", 00:22:47.544 "trtype": "$TEST_TRANSPORT", 00:22:47.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "$NVMF_PORT", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.544 "hdgst": ${hdgst:-false}, 00:22:47.544 "ddgst": ${ddgst:-false} 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 } 00:22:47.544 EOF 00:22:47.544 )") 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.544 { 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme$subsystem", 00:22:47.544 "trtype": "$TEST_TRANSPORT", 00:22:47.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "$NVMF_PORT", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.544 "hdgst": ${hdgst:-false}, 00:22:47.544 "ddgst": ${ddgst:-false} 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 } 00:22:47.544 EOF 00:22:47.544 )") 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.544 { 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme$subsystem", 00:22:47.544 "trtype": "$TEST_TRANSPORT", 00:22:47.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "$NVMF_PORT", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.544 "hdgst": ${hdgst:-false}, 00:22:47.544 "ddgst": ${ddgst:-false} 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 } 00:22:47.544 EOF 00:22:47.544 )") 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.544 { 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme$subsystem", 00:22:47.544 "trtype": "$TEST_TRANSPORT", 00:22:47.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "$NVMF_PORT", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.544 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.544 "hdgst": ${hdgst:-false}, 00:22:47.544 "ddgst": ${ddgst:-false} 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 } 00:22:47.544 EOF 00:22:47.544 )") 00:22:47.544 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:47.544 12:24:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme1", 00:22:47.544 "trtype": "tcp", 00:22:47.544 "traddr": "10.0.0.2", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "4420", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.544 "hdgst": false, 00:22:47.544 "ddgst": false 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 },{ 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme2", 00:22:47.544 "trtype": "tcp", 00:22:47.544 "traddr": "10.0.0.2", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "4420", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:47.544 "hdgst": false, 00:22:47.544 "ddgst": false 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 },{ 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme3", 00:22:47.544 "trtype": "tcp", 00:22:47.544 "traddr": "10.0.0.2", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "4420", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:47.544 "hdgst": false, 00:22:47.544 "ddgst": false 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 },{ 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme4", 00:22:47.544 "trtype": "tcp", 00:22:47.544 "traddr": "10.0.0.2", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "4420", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:47.544 "hdgst": false, 00:22:47.544 "ddgst": false 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 },{ 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme5", 00:22:47.544 "trtype": "tcp", 00:22:47.544 "traddr": "10.0.0.2", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "4420", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:47.544 "hdgst": false, 00:22:47.544 "ddgst": false 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 },{ 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme6", 00:22:47.544 "trtype": "tcp", 00:22:47.544 "traddr": "10.0.0.2", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "4420", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:47.544 "hdgst": false, 00:22:47.544 "ddgst": false 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 },{ 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme7", 00:22:47.544 "trtype": "tcp", 00:22:47.544 "traddr": "10.0.0.2", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "4420", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:47.544 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:47.544 "hdgst": false, 00:22:47.544 "ddgst": false 00:22:47.544 }, 00:22:47.544 "method": "bdev_nvme_attach_controller" 00:22:47.544 },{ 00:22:47.544 "params": { 00:22:47.544 "name": "Nvme8", 00:22:47.544 "trtype": "tcp", 00:22:47.544 "traddr": "10.0.0.2", 00:22:47.544 "adrfam": "ipv4", 00:22:47.544 "trsvcid": "4420", 00:22:47.544 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:47.544 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:47.544 "hdgst": false, 00:22:47.544 "ddgst": false 00:22:47.544 }, 00:22:47.545 "method": "bdev_nvme_attach_controller" 00:22:47.545 },{ 00:22:47.545 "params": { 00:22:47.545 "name": "Nvme9", 00:22:47.545 "trtype": "tcp", 00:22:47.545 "traddr": "10.0.0.2", 00:22:47.545 "adrfam": "ipv4", 00:22:47.545 "trsvcid": "4420", 00:22:47.545 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:47.545 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:47.545 "hdgst": false, 00:22:47.545 "ddgst": false 00:22:47.545 }, 00:22:47.545 "method": "bdev_nvme_attach_controller" 00:22:47.545 },{ 00:22:47.545 "params": { 00:22:47.545 "name": "Nvme10", 00:22:47.545 "trtype": "tcp", 00:22:47.545 "traddr": "10.0.0.2", 00:22:47.545 "adrfam": "ipv4", 00:22:47.545 "trsvcid": "4420", 00:22:47.545 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:47.545 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:47.545 "hdgst": false, 00:22:47.545 "ddgst": false 00:22:47.545 }, 00:22:47.545 "method": "bdev_nvme_attach_controller" 00:22:47.545 }' 00:22:47.545 [2024-05-15 12:24:15.907593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.545 [2024-05-15 12:24:15.977973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.444 Running I/O for 10 seconds... 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=81 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 81 -ge 100 ']' 00:22:50.010 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2199688 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # '[' -z 2199688 ']' 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # kill -0 2199688 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # uname 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2199688 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2199688' 00:22:50.276 killing process with pid 2199688 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # kill 2199688 00:22:50.276 [2024-05-15 12:24:18.780570] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:50.276 12:24:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # wait 2199688 00:22:50.276 [2024-05-15 12:24:18.780951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.780981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.780992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781071] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781128] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781153] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781162] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781171] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781201] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781255] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781290] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the 
state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781353] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781421] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781439] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781498] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781507] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.276 [2024-05-15 12:24:18.781515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.781524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa3b0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782459] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782578] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782630] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 
12:24:18.782673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782699] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782707] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782849] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same 
with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782901] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782935] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782944] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782960] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.782996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.783005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.783014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.783022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113dc0 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.783996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784027] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784174] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.277 [2024-05-15 12:24:18.784182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the 
state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784330] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784355] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784415] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784423] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784476] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.784554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fa850 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.278 [2024-05-15 12:24:18.785298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.278 [2024-05-15 12:24:18.785310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.278 [2024-05-15 12:24:18.785320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:50.278 [2024-05-15 12:24:18.785330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.278 [2024-05-15 12:24:18.785340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.278 [2024-05-15 12:24:18.785349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.278 [2024-05-15 12:24:18.785358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.278 [2024-05-15 12:24:18.785368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f56ae0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.278 [2024-05-15 12:24:18.785421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.278 [2024-05-15 12:24:18.785430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.278 [2024-05-15 12:24:18.785439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.278 [2024-05-15 12:24:18.785450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.278 [2024-05-15 12:24:18.785462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.278 [2024-05-15 12:24:18.785472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.278 [2024-05-15 12:24:18.785481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.278 [2024-05-15 12:24:18.785490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea9f0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.278 [2024-05-15 12:24:18.785532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.278 [2024-05-15 12:24:18.785542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.278 [2024-05-15 12:24:18.785552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.278 [2024-05-15 12:24:18.785562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.278 [2024-05-15 12:24:18.785571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.278 [2024-05-15 12:24:18.785581] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.278 [2024-05-15 12:24:18.785590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.278 [2024-05-15 12:24:18.785599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b5250 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785741] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.278 [2024-05-15 12:24:18.785876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with 
the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785901] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785944] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.785995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786074] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786109] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786134] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786185] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786206] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786222] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.786256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10facf0 is same with the state(5) to be set 00:22:50.279 [2024-05-15 12:24:18.787273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 
[2024-05-15 12:24:18.787299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787512] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.279 [2024-05-15 12:24:18.787669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.279 [2024-05-15 12:24:18.787680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fb190 is same with the state(5) to be set 00:22:50.280 [2024-05-15 12:24:18.787767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fb190 is same with the state(5) to be set 00:22:50.280 [2024-05-15 12:24:18.787789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.787985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.787996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fb630 is same with the state(5) to be set 00:22:50.280 [2024-05-15 12:24:18.788114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fb630 is same with the state(5) to be set 00:22:50.280 [2024-05-15 12:24:18.788123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.280 [2024-05-15 12:24:18.788258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:50.280 [2024-05-15 12:24:18.788278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.280 [2024-05-15 12:24:18.788287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 
12:24:18.788474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.788565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.788656] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ecc340 was disconnected and freed. reset controller. 
00:22:50.281 [2024-05-15 12:24:18.788710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788897] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788905] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.788994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789088] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789164] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbad0 is same with the state(5) to be set 00:22:50.281 [2024-05-15 12:24:18.789687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.281 [2024-05-15 12:24:18.789711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.281 [2024-05-15 12:24:18.789727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.789736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.789757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.789777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.789797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:50.282 [2024-05-15 12:24:18.789817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.789836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.789856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.789875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.789894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.789914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.789933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.789956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.789976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.789992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 
[2024-05-15 12:24:18.790021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:1[2024-05-15 12:24:18.790211] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790227] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:1[2024-05-15 12:24:18.790236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 12:24:18.790248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with [2024-05-15 12:24:18.790259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:1the state(5) to be set 00:22:50.282 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with [2024-05-15 12:24:18.790270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:50.282 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:1[2024-05-15 12:24:18.790345] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 12:24:18.790358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.282 [2024-05-15 12:24:18.790387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.282 [2024-05-15 12:24:18.790391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.282 [2024-05-15 12:24:18.790395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:1[2024-05-15 12:24:18.790413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790423] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 
is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790459] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with [2024-05-15 12:24:18.790478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:1the state(5) to be set 00:22:50.283 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790500] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790527] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with [2024-05-15 12:24:18.790546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:1the state(5) to be set 00:22:50.283 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with [2024-05-15 12:24:18.790557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:50.283 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:1[2024-05-15 12:24:18.790570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 12:24:18.790580] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:1[2024-05-15 12:24:18.790655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 12:24:18.790667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790686] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:1[2024-05-15 12:24:18.790742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 12:24:18.790756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x10fbf70 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with [2024-05-15 12:24:18.790767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:1the state(5) to be set 00:22:50.283 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 [2024-05-15 12:24:18.790794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 [2024-05-15 12:24:18.790803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:1[2024-05-15 12:24:18.790812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.283 the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 12:24:18.790823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbf70 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.283 the state(5) to be set 00:22:50.283 [2024-05-15 12:24:18.790834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.284 [2024-05-15 12:24:18.790843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.284 [2024-05-15 12:24:18.790854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.284 [2024-05-15 12:24:18.790863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.284 [2024-05-15 12:24:18.790873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.284 [2024-05-15 12:24:18.790882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.284 [2024-05-15 12:24:18.790892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:50.284 [2024-05-15 12:24:18.790901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.284 [2024-05-15 12:24:18.790911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.284 [2024-05-15 12:24:18.790921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.284 [2024-05-15 12:24:18.790931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.284 [2024-05-15 12:24:18.790944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.284 [2024-05-15 12:24:18.790955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.284 [2024-05-15 12:24:18.790963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.284 [2024-05-15 12:24:18.790974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.284 [2024-05-15 12:24:18.790983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.284 [2024-05-15 12:24:18.790993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.284 [2024-05-15 12:24:18.791608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791640] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791648] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791683] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the 
state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791857] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791891] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791908] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791916] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791957] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.791992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.792000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.792008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.792018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.792027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.792036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.792044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.792053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.792062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.792070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 
12:24:18.792078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.792087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.284 [2024-05-15 12:24:18.792096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.285 [2024-05-15 12:24:18.792104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.285 [2024-05-15 12:24:18.792112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.285 [2024-05-15 12:24:18.792121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.285 [2024-05-15 12:24:18.792129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.285 [2024-05-15 12:24:18.792149] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.285 [2024-05-15 12:24:18.792214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc410 is same with the state(5) to be set 00:22:50.285 [2024-05-15 12:24:18.792823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc8b0 is same with the state(5) to be set 00:22:50.285 [2024-05-15 12:24:18.792842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc8b0 is same with the state(5) to be set 00:22:50.556 [2024-05-15 12:24:18.805667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.805691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.556 [2024-05-15 12:24:18.805703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.805717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.556 [2024-05-15 12:24:18.805729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.805767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:50.556 [2024-05-15 12:24:18.805830] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ee6060 was disconnected and freed. reset controller. 
00:22:50.556 [2024-05-15 12:24:18.806534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b4020 is same with the state(5) to be set 00:22:50.556 [2024-05-15 12:24:18.806696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f18100 is same with the state(5) to be set 00:22:50.556 [2024-05-15 12:24:18.806827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f0610 is same with the state(5) to be set 00:22:50.556 [2024-05-15 12:24:18.806959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.806985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.806997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.807009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.807021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.807034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.807046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.807058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f05ee0 is same with the state(5) to be set 00:22:50.556 [2024-05-15 12:24:18.807084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f56ae0 (9): Bad file descriptor 00:22:50.556 [2024-05-15 12:24:18.807125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.807139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.807151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.807163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 
12:24:18.807176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.807188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.807207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.807219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.556 [2024-05-15 12:24:18.807231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0df50 is same with the state(5) to be set 00:22:50.556 [2024-05-15 12:24:18.807254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eea9f0 (9): Bad file descriptor 00:22:50.556 [2024-05-15 12:24:18.807288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.556 [2024-05-15 12:24:18.807302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.807315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.557 [2024-05-15 12:24:18.807327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.807340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.557 [2024-05-15 12:24:18.807351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.807368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.557 [2024-05-15 12:24:18.807380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.807391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f05a70 is same with the state(5) to be set 00:22:50.557 [2024-05-15 12:24:18.807413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b5250 (9): Bad file descriptor 00:22:50.557 [2024-05-15 12:24:18.807447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.557 [2024-05-15 12:24:18.807460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.807473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.557 [2024-05-15 12:24:18.807485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.807498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.557 [2024-05-15 12:24:18.807510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.807522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.557 [2024-05-15 12:24:18.807534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.807546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30660 is same with the state(5) to be set 00:22:50.557 [2024-05-15 12:24:18.808917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.808942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.808961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.808974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.808989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:50.557 [2024-05-15 12:24:18.809158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 
12:24:18.809434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809706] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.557 [2024-05-15 12:24:18.809787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.557 [2024-05-15 12:24:18.809802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.809816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.809828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.809843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.809856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.809870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.809883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.809898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.809910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.809924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.809937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.809951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.809964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.809978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.809991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810758] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ed87b0 was disconnected and freed. reset controller. 
00:22:50.558 [2024-05-15 12:24:18.810848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.810976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.810991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.811004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.811018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.811031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.558 [2024-05-15 12:24:18.811045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.558 [2024-05-15 12:24:18.811058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 
[2024-05-15 12:24:18.811127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 
12:24:18.811413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 
12:24:18.811688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.811972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.811987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.812000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.812016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.812029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.812043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.812056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.812070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.812083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.812098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.812110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.812124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.812137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.812151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.812164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.559 [2024-05-15 12:24:18.812179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.559 [2024-05-15 12:24:18.812196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.812608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.812681] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ee4b60 was disconnected and freed. reset controller. 00:22:50.560 [2024-05-15 12:24:18.814072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.560 [2024-05-15 12:24:18.814557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.560 [2024-05-15 12:24:18.814572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.814983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.814995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815353] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.561 [2024-05-15 12:24:18.815667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.561 [2024-05-15 12:24:18.815680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.815694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.815707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.815721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.815734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.815748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.815761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.815775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.815788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.815802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.815815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.815830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.815842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.815918] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2014d10 was disconnected and freed. reset controller. 
00:22:50.562 [2024-05-15 12:24:18.816000] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 
00:22:50.562 [2024-05-15 12:24:18.816041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f18100 (9): Bad file descriptor 
00:22:50.562 [2024-05-15 12:24:18.816122] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:22:50.562 [2024-05-15 12:24:18.819691] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:22:50.562 [2024-05-15 12:24:18.819718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 
00:22:50.562 [2024-05-15 12:24:18.819741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 
00:22:50.562 [2024-05-15 12:24:18.819756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f05a70 (9): Bad file descriptor 
00:22:50.562 [2024-05-15 12:24:18.819769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f0610 (9): Bad file descriptor 
00:22:50.562 [2024-05-15 12:24:18.819794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b4020 (9): Bad file descriptor 
00:22:50.562 [2024-05-15 12:24:18.819812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f05ee0 (9): Bad file descriptor 
00:22:50.562 [2024-05-15 12:24:18.819833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0df50 (9): Bad file descriptor 
00:22:50.562 [2024-05-15 12:24:18.819863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f30660 (9): Bad file descriptor 
00:22:50.562 [2024-05-15 12:24:18.820510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 
00:22:50.562 [2024-05-15 12:24:18.820945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:50.562 [2024-05-15 12:24:18.821415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:50.562 [2024-05-15 12:24:18.821430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f18100 with addr=10.0.0.2, port=4420 
00:22:50.562 [2024-05-15 12:24:18.821441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f18100 is same with the state(5) to be set 
00:22:50.562 [2024-05-15 12:24:18.821520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:50.562 [2024-05-15 12:24:18.821533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:50.562 [2024-05-15 12:24:18.821547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:50.562 [2024-05-15 12:24:18.821557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:50.562 [2024-05-15 12:24:18.821568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:50.562 [2024-05-15 12:24:18.821577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:50.562 [2024-05-15 12:24:18.821588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 
12:24:18.821790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.821987] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.821996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.822007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.822016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.822027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.822036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.562 [2024-05-15 12:24:18.822046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.562 [2024-05-15 12:24:18.822055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.822787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.822797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc70 is same with the state(5) to be set 00:22:50.563 [2024-05-15 12:24:18.823755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.823769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.823781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.823790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.823801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.563 [2024-05-15 12:24:18.823811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.563 [2024-05-15 12:24:18.823821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.823831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.823841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.823852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.823863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.823872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.823882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.823892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.823902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.823912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.823922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.823931] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.823942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.823951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.823962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.823971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.823981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.823990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.564 [2024-05-15 12:24:18.824598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.564 [2024-05-15 12:24:18.824607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:50.565 [2024-05-15 12:24:18.824736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 
12:24:18.824935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.824983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.824993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.825002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.825013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.825022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.826452] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:50.565 [2024-05-15 12:24:18.826506] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:50.565 [2024-05-15 12:24:18.826771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.826785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.826799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.826810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.826820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.826836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.826847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.826857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.826868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.826877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.826888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.826897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.826908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.826917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.826928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.826937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.826947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.826957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.826967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.826976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.826987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.826996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.827007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.827016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.827026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.827035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.565 [2024-05-15 12:24:18.827046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.565 [2024-05-15 12:24:18.827055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 
[2024-05-15 12:24:18.827075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 
12:24:18.827278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827476] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.566 [2024-05-15 12:24:18.827853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.566 [2024-05-15 12:24:18.827864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.827873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.827883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.827892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.827903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.827912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.827922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.827931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.827941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.827951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.827961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.827971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.827982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.827991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.828001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.828010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.828021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.828030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.828040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.828049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.828059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20161d0 is same with the state(5) to be set 00:22:50.567 [2024-05-15 12:24:18.830077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:50.567 [2024-05-15 12:24:18.830101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.567 [2024-05-15 12:24:18.830113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:50.567 [2024-05-15 12:24:18.830125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:50.567 [2024-05-15 12:24:18.830621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.831087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.831100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f0610 with addr=10.0.0.2, port=4420 00:22:50.567 [2024-05-15 12:24:18.831110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f0610 is same with the state(5) to be set 00:22:50.567 [2024-05-15 12:24:18.831530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.831969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.831981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f05a70 with addr=10.0.0.2, port=4420 00:22:50.567 [2024-05-15 12:24:18.831990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f05a70 is same with the state(5) to be set 00:22:50.567 [2024-05-15 12:24:18.832374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.832749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.832761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f05ee0 with addr=10.0.0.2, port=4420 00:22:50.567 [2024-05-15 12:24:18.832770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f05ee0 is same with the state(5) to be set 00:22:50.567 [2024-05-15 12:24:18.832782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f18100 (9): Bad file descriptor 00:22:50.567 [2024-05-15 12:24:18.832823] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:50.567 [2024-05-15 12:24:18.833366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.833786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.833798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b4020 with addr=10.0.0.2, port=4420 00:22:50.567 [2024-05-15 12:24:18.833808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b4020 is same with the state(5) to be set 00:22:50.567 [2024-05-15 12:24:18.834178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.834629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.834641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eea9f0 with addr=10.0.0.2, port=4420 00:22:50.567 [2024-05-15 12:24:18.834650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea9f0 is same with the state(5) to be set 00:22:50.567 [2024-05-15 12:24:18.835086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.835521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.835533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b5250 with addr=10.0.0.2, port=4420 00:22:50.567 [2024-05-15 12:24:18.835542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b5250 is same with the state(5) to be set 00:22:50.567 [2024-05-15 12:24:18.835914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.836351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.567 [2024-05-15 12:24:18.836363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f56ae0 with addr=10.0.0.2, port=4420 00:22:50.567 [2024-05-15 12:24:18.836376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f56ae0 is same with the state(5) to be set 00:22:50.567 [2024-05-15 12:24:18.836388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f0610 (9): Bad file descriptor 00:22:50.567 [2024-05-15 12:24:18.836400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f05a70 (9): Bad file descriptor 00:22:50.567 [2024-05-15 12:24:18.836411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f05ee0 (9): Bad file descriptor 00:22:50.567 [2024-05-15 12:24:18.836422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:50.567 [2024-05-15 12:24:18.836431] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:50.567 [2024-05-15 12:24:18.836442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:50.567 [2024-05-15 12:24:18.836460] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:50.567 [2024-05-15 12:24:18.836473] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
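[editorial note] The repeated "posix_sock_create: *ERROR*: connect() failed, errno = 111" entries above are ordinary ECONNREFUSED results on Linux: the bdev_nvme reset path keeps re-dialing the target listener (10.0.0.2, port 4420 in this run) while the subsystem is being torn down, so each reconnect attempt is refused and the controller eventually reports "controller reinitialization failed". A minimal standalone sketch (plain POSIX sockets, hypothetical probe code, not SPDK's sock layer) that reproduces the same errno against a port with no listener:

/* Hypothetical standalone probe, not part of SPDK: shows that
 * "connect() failed, errno = 111" in the log above is plain
 * ECONNREFUSED (Linux errno 111), i.e. nothing is listening on
 * the NVMe/TCP port while the target side is shutting down. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	/* Address and port copied from the log above (NVMe/TCP listener). */
	const char *addr = "10.0.0.2";
	const uint16_t port = 4420;

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	struct sockaddr_in sa;
	memset(&sa, 0, sizeof(sa));
	sa.sin_family = AF_INET;
	sa.sin_port = htons(port);
	inet_pton(AF_INET, addr, &sa.sin_addr);

	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
		/* With no listener on addr:port this prints errno 111
		 * (ECONNREFUSED) on Linux, matching posix_sock_create above. */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	} else {
		printf("connected; a listener is up on %s:%u\n", addr, (unsigned)port);
	}

	close(fd);
	return 0;
}

Run on the initiator host while the target is down, this prints the same "connect() failed, errno = 111" line, which is why the subsequent reset attempts for cnode1/cnode2/cnode9/cnode10 end in the failed-state messages that follow.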
00:22:50.567 [2024-05-15 12:24:18.836485] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:50.567 [2024-05-15 12:24:18.836973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.836987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.837001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.837011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.837021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.837031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.837042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.837051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.837062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.837071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.837081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.837090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.837101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.837110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.837120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.837129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.837140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.837152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.837163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.837172] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.837182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.837196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.837208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.567 [2024-05-15 12:24:18.837217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.567 [2024-05-15 12:24:18.837227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.837984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.837995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.838004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.838015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.568 [2024-05-15 12:24:18.838024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.568 [2024-05-15 12:24:18.838034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.838043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.838054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.838064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.838074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.838084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.838095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.838104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.838115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.838124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.838134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.838144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.838155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.838164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.838175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.838184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.838201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.838211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.838221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.838230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.838241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.838251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.838261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2810680 is same with the state(5) to be set 00:22:50.569 [2024-05-15 12:24:18.839236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.569 [2024-05-15 12:24:18.839687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.569 [2024-05-15 12:24:18.839700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.839713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.839726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.839739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.839753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.839765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.839779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.839792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.839806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.839818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.839831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.839843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.839858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.839870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.839884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.839896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.839910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.839923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.839937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.839951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.839965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.839977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.839991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:50.570 [2024-05-15 12:24:18.840341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 
12:24:18.840603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.570 [2024-05-15 12:24:18.840787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.570 [2024-05-15 12:24:18.840801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.571 [2024-05-15 12:24:18.840813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.571 [2024-05-15 12:24:18.840826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29b70b0 is same with the state(5) to be set 00:22:50.571 [2024-05-15 12:24:18.842653] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
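The wall of notices above is what a forced shutdown looks like from the host side: the target tears down I/O submission queue 1 while bdevperf still has its queue-depth-64 reads and writes outstanding, so every in-flight command completes with "ABORTED - SQ DELETION". SPDK prints the status as (00/08), i.e. status code type 0x0 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion). A quick, non-authoritative way to size the flood is to grep a saved copy of this console output; console.log below is a hypothetical capture file, not something the test itself produces.

    # count the aborted completions and summarize the distinct abort statuses
    grep -c 'ABORTED - SQ DELETION (00/08)' console.log
    grep -o 'ABORTED - [A-Z ]* ([0-9a-f/]*)' console.log | sort | uniq -c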
00:22:50.571 [2024-05-15 12:24:18.842676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:50.571 task offset: 32768 on job bdev=Nvme3n1 fails
00:22:50.571
00:22:50.571 Latency(us)
00:22:50.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:50.571 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:50.571 Job: Nvme1n1 ended in about 1.07 seconds with error
00:22:50.571 Verification LBA range: start 0x0 length 0x400
00:22:50.571 Nvme1n1 : 1.07 178.93 11.18 59.64 0.00 265598.77 20237.52 223136.97
00:22:50.571 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:50.571 Job: Nvme2n1 ended in about 1.08 seconds with error
00:22:50.571 Verification LBA range: start 0x0 length 0x400
00:22:50.571 Nvme2n1 : 1.08 238.08 14.88 59.52 0.00 209128.33 21390.95 208876.34
00:22:50.571 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:50.571 Job: Nvme3n1 ended in about 1.06 seconds with error
00:22:50.571 Verification LBA range: start 0x0 length 0x400
00:22:50.571 Nvme3n1 : 1.06 241.96 15.12 60.49 0.00 201929.03 19608.37 206359.76
00:22:50.571 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:50.571 Job: Nvme4n1 ended in about 1.07 seconds with error
00:22:50.571 Verification LBA range: start 0x0 length 0x400
00:22:50.571 Nvme4n1 : 1.07 240.03 15.00 60.01 0.00 199844.17 19084.08 207198.62
00:22:50.571 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:50.571 Job: Nvme5n1 ended in about 1.07 seconds with error
00:22:50.571 Verification LBA range: start 0x0 length 0x400
00:22:50.571 Nvme5n1 : 1.07 239.77 14.99 59.94 0.00 196320.46 19398.66 208037.48
00:22:50.571 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:50.571 Job: Nvme6n1 ended in about 1.06 seconds with error
00:22:50.571 Verification LBA range: start 0x0 length 0x400
00:22:50.571 Nvme6n1 : 1.06 240.83 15.05 60.21 0.00 191613.83 22124.95 208037.48
00:22:50.571 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:50.571 Job: Nvme7n1 ended in about 1.09 seconds with error
00:22:50.571 Verification LBA range: start 0x0 length 0x400
00:22:50.571 Nvme7n1 : 1.09 176.40 11.02 58.80 0.00 241247.23 24746.39 258369.13
00:22:50.571 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:50.571 Job: Nvme8n1 ended in about 1.09 seconds with error
00:22:50.571 Verification LBA range: start 0x0 length 0x400
00:22:50.571 Nvme8n1 : 1.09 178.72 11.17 58.66 0.00 234481.03 20027.80 233203.30
00:22:50.571 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:50.571 Job: Nvme9n1 ended in about 1.07 seconds with error
00:22:50.571 Verification LBA range: start 0x0 length 0x400
00:22:50.571 Nvme9n1 : 1.07 179.61 11.23 59.87 0.00 226983.73 12530.48 248302.80
00:22:50.571 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:50.571 Job: Nvme10n1 ended in about 1.08 seconds with error
00:22:50.571 Verification LBA range: start 0x0 length 0x400
00:22:50.571 Nvme10n1 : 1.08 178.06 11.13 59.35 0.00 224599.04 19818.09 224814.69
00:22:50.571 ===================================================================================================================
00:22:50.571 Total : 2092.38 130.77 596.49 0.00 217036.36 12530.48 258369.13
00:22:50.571 [2024-05-15 12:24:18.865436] app.c:1053:spdk_app_stop: *WARNING*:
spdk_app_stop'd on non-zero 00:22:50.571 [2024-05-15 12:24:18.865491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:50.571 [2024-05-15 12:24:18.865544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b4020 (9): Bad file descriptor 00:22:50.571 [2024-05-15 12:24:18.865561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eea9f0 (9): Bad file descriptor 00:22:50.571 [2024-05-15 12:24:18.865573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b5250 (9): Bad file descriptor 00:22:50.571 [2024-05-15 12:24:18.865585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f56ae0 (9): Bad file descriptor 00:22:50.571 [2024-05-15 12:24:18.865597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:50.571 [2024-05-15 12:24:18.865606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:50.571 [2024-05-15 12:24:18.865618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:50.571 [2024-05-15 12:24:18.865635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:50.571 [2024-05-15 12:24:18.865644] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:50.571 [2024-05-15 12:24:18.865653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:50.571 [2024-05-15 12:24:18.865669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:50.571 [2024-05-15 12:24:18.865677] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:50.571 [2024-05-15 12:24:18.865691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:50.571 [2024-05-15 12:24:18.865733] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:50.571 [2024-05-15 12:24:18.865747] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:50.571 [2024-05-15 12:24:18.865759] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:50.571 [2024-05-15 12:24:18.865774] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:50.571 [2024-05-15 12:24:18.865862] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.571 [2024-05-15 12:24:18.865872] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.571 [2024-05-15 12:24:18.865880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
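One consistency check worth knowing when reading the bdevperf table above: the MiB/s column is simply IOPS scaled by the 65536-byte I/O size (64 KiB = 1/16 MiB), so MiB/s = IOPS / 16. For example:

    # MiB/s = IOPS * 65536 B / 1048576 B = IOPS / 16
    awk 'BEGIN { printf "%.2f\n", 178.93 / 16 }'    # 11.18  -> matches the Nvme1n1 row
    awk 'BEGIN { printf "%.2f\n", 2092.38 / 16 }'   # 130.77 -> matches the Total row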
00:22:50.571 [2024-05-15 12:24:18.866428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.571 [2024-05-15 12:24:18.866884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.571 [2024-05-15 12:24:18.866899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f30660 with addr=10.0.0.2, port=4420 00:22:50.571 [2024-05-15 12:24:18.866913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30660 is same with the state(5) to be set 00:22:50.571 [2024-05-15 12:24:18.867400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.571 [2024-05-15 12:24:18.867767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.571 [2024-05-15 12:24:18.867779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0df50 with addr=10.0.0.2, port=4420 00:22:50.571 [2024-05-15 12:24:18.867789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0df50 is same with the state(5) to be set 00:22:50.571 [2024-05-15 12:24:18.867800] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:50.571 [2024-05-15 12:24:18.867809] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:50.571 [2024-05-15 12:24:18.867819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:50.571 [2024-05-15 12:24:18.867838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.571 [2024-05-15 12:24:18.867846] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:50.571 [2024-05-15 12:24:18.867855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:50.571 [2024-05-15 12:24:18.867866] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:50.571 [2024-05-15 12:24:18.867875] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:50.571 [2024-05-15 12:24:18.867884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:50.571 [2024-05-15 12:24:18.867895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:50.571 [2024-05-15 12:24:18.867903] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:50.571 [2024-05-15 12:24:18.867912] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:50.571 [2024-05-15 12:24:18.867957] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:50.571 [2024-05-15 12:24:18.867971] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:50.571 [2024-05-15 12:24:18.867982] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:50.571 [2024-05-15 12:24:18.867998] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
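The connect() failures above all carry errno = 111: with the target application stopping, nothing is accepting TCP connections on 10.0.0.2:4420 any more, so each reconnect attempt from the bdev_nvme layer is refused outright. On a Linux build host with the kernel UAPI headers installed, the errno value can be confirmed directly:

    # errno 111 on Linux is ECONNREFUSED ("Connection refused")
    grep -w ECONNREFUSED /usr/include/asm-generic/errno.h   # -> #define ECONNREFUSED 111 /* Connection refused */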
00:22:50.571 [2024-05-15 12:24:18.868010] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:50.571 [2024-05-15 12:24:18.868510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:50.571 [2024-05-15 12:24:18.868544] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.571 [2024-05-15 12:24:18.868553] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.571 [2024-05-15 12:24:18.868560] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.571 [2024-05-15 12:24:18.868568] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.571 [2024-05-15 12:24:18.868596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f30660 (9): Bad file descriptor 00:22:50.571 [2024-05-15 12:24:18.868610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0df50 (9): Bad file descriptor 00:22:50.571 [2024-05-15 12:24:18.868920] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:50.571 [2024-05-15 12:24:18.868938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:50.571 [2024-05-15 12:24:18.869352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.571 [2024-05-15 12:24:18.869833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.571 [2024-05-15 12:24:18.869852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f18100 with addr=10.0.0.2, port=4420 00:22:50.571 [2024-05-15 12:24:18.869863] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f18100 is same with the state(5) to be set 00:22:50.571 [2024-05-15 12:24:18.869873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:50.572 [2024-05-15 12:24:18.869882] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:50.572 [2024-05-15 12:24:18.869891] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:50.572 [2024-05-15 12:24:18.869904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:50.572 [2024-05-15 12:24:18.869913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:50.572 [2024-05-15 12:24:18.869922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:50.572 [2024-05-15 12:24:18.870471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:50.572 [2024-05-15 12:24:18.870509] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.572 [2024-05-15 12:24:18.870519] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:50.572 [2024-05-15 12:24:18.870963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.572 [2024-05-15 12:24:18.871414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.572 [2024-05-15 12:24:18.871432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f05ee0 with addr=10.0.0.2, port=4420 00:22:50.572 [2024-05-15 12:24:18.871443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f05ee0 is same with the state(5) to be set 00:22:50.572 [2024-05-15 12:24:18.871838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.572 [2024-05-15 12:24:18.872229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.572 [2024-05-15 12:24:18.872245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f05a70 with addr=10.0.0.2, port=4420 00:22:50.572 [2024-05-15 12:24:18.872257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f05a70 is same with the state(5) to be set 00:22:50.572 [2024-05-15 12:24:18.872275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f18100 (9): Bad file descriptor 00:22:50.572 [2024-05-15 12:24:18.872723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.572 [2024-05-15 12:24:18.873158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.572 [2024-05-15 12:24:18.873172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f0610 with addr=10.0.0.2, port=4420 00:22:50.572 [2024-05-15 12:24:18.873183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f0610 is same with the state(5) to be set 00:22:50.572 [2024-05-15 12:24:18.873201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f05ee0 (9): Bad file descriptor 00:22:50.572 [2024-05-15 12:24:18.873213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f05a70 (9): Bad file descriptor 00:22:50.572 [2024-05-15 12:24:18.873223] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:50.572 [2024-05-15 12:24:18.873231] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:50.572 [2024-05-15 12:24:18.873241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:50.572 [2024-05-15 12:24:18.873270] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.572 [2024-05-15 12:24:18.873282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f0610 (9): Bad file descriptor 00:22:50.572 [2024-05-15 12:24:18.873297] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:50.572 [2024-05-15 12:24:18.873311] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:50.572 [2024-05-15 12:24:18.873324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
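Taken together, the reset and reconnect failures for cnode1 through cnode10 are consistent with what this shutdown test sets out to exercise: the target is brought up with ten TCP subsystems listening on 10.0.0.2:4420, a verify workload is run against them, and the target then goes away while I/O is still in flight, which is why the kill -9 below finds no process left and falls through to true. A rough, illustrative sketch of that flow (binary paths, core mask and the JSON config name are assumptions, not the exact script contents):

    # start the target, drive I/O, then take the target away mid-run
    ./build/bin/nvmf_tgt -m 0xE &
    nvmfpid=$!
    # ...create the TCP transport, Malloc bdevs, subsystems and 10.0.0.2:4420 listeners via scripts/rpc.py...
    ./build/examples/bdevperf -q 64 -o 65536 -w verify -t 10 --json bdevperf.json &
    perfpid=$!
    sleep 1
    kill -9 "$nvmfpid" || true    # hard-stop the target while the verify jobs are running
    wait "$perfpid" || true       # bdevperf then reports the aborted jobs seen above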
00:22:50.572 [2024-05-15 12:24:18.873335] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:50.572 [2024-05-15 12:24:18.873344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:50.572 [2024-05-15 12:24:18.873352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:50.572 [2024-05-15 12:24:18.873378] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.572 [2024-05-15 12:24:18.873387] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.572 [2024-05-15 12:24:18.873394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:50.572 [2024-05-15 12:24:18.873402] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:50.572 [2024-05-15 12:24:18.873411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:50.572 [2024-05-15 12:24:18.873437] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.831 12:24:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:50.831 12:24:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2200006 00:22:51.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2200006) - No such process 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:51.767 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:51.767 rmmod nvme_tcp 00:22:51.767 rmmod nvme_fabrics 00:22:52.026 rmmod nvme_keyring 00:22:52.026 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.026 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:52.026 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- 
# return 0 00:22:52.026 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:52.026 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.026 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.026 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.026 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.026 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.026 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.026 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.026 12:24:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.930 12:24:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.930 00:22:53.930 real 0m8.463s 00:22:53.930 user 0m21.586s 00:22:53.930 sys 0m1.764s 00:22:53.930 12:24:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:53.930 12:24:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.930 ************************************ 00:22:53.930 END TEST nvmf_shutdown_tc3 00:22:53.930 ************************************ 00:22:53.930 12:24:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:53.930 00:22:53.930 real 0m33.765s 00:22:53.930 user 1m21.156s 00:22:53.930 sys 0m10.872s 00:22:53.930 12:24:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:53.930 12:24:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:53.930 ************************************ 00:22:53.930 END TEST nvmf_shutdown 00:22:53.930 ************************************ 00:22:54.188 12:24:22 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:22:54.189 12:24:22 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:54.189 12:24:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:54.189 12:24:22 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:22:54.189 12:24:22 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:54.189 12:24:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:54.189 12:24:22 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:22:54.189 12:24:22 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:54.189 12:24:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:22:54.189 12:24:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:54.189 12:24:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:54.189 ************************************ 00:22:54.189 START TEST nvmf_multicontroller 00:22:54.189 ************************************ 00:22:54.189 12:24:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:54.189 * Looking for test storage... 
00:22:54.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:54.447 12:24:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.447 12:24:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.005 12:24:28 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:01.005 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:01.005 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:01.005 Found net devices under 0000:af:00.0: cvl_0_0 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:01.005 Found net devices under 0000:af:00.1: cvl_0_1 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.005 12:24:28 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.005 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:23:01.006 00:23:01.006 --- 10.0.0.2 ping statistics --- 00:23:01.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.006 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:23:01.006 00:23:01.006 --- 10.0.0.1 ping statistics --- 00:23:01.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.006 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.006 12:24:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2204303 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2204303 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 2204303 ']' 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@833 -- # local max_retries=100 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:01.006 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.006 [2024-05-15 12:24:29.079051] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:23:01.006 [2024-05-15 12:24:29.079098] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.006 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.006 [2024-05-15 12:24:29.150587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:01.006 [2024-05-15 12:24:29.219392] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.006 [2024-05-15 12:24:29.219436] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.006 [2024-05-15 12:24:29.219446] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.006 [2024-05-15 12:24:29.219455] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.006 [2024-05-15 12:24:29.219479] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.006 [2024-05-15 12:24:29.219584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.006 [2024-05-15 12:24:29.219658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.006 [2024-05-15 12:24:29.219660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.572 [2024-05-15 12:24:29.928518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.572 12:24:29 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.572 Malloc0 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.572 [2024-05-15 12:24:29.991262] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:01.572 [2024-05-15 12:24:29.991519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.572 12:24:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.572 [2024-05-15 12:24:29.999399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:01.572 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.572 12:24:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:01.572 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.572 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.572 Malloc1 00:23:01.572 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.572 12:24:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:01.572 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.572 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.572 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.572 12:24:30 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:01.572 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.572 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2204587 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2204587 /var/tmp/bdevperf.sock 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 2204587 ']' 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
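Up to this point the multicontroller test has provisioned the target entirely over JSON-RPC: a TCP transport, two 64 MB malloc bdevs, subsystems cnode1 and cnode2 each exposing one namespace, and listeners on ports 4420 and 4421, before launching bdevperf against /var/tmp/bdevperf.sock. The lines below are a minimal sketch of that target-side sequence using SPDK's scripts/rpc.py; the rpc.py path and the default target RPC socket are assumptions, while the commands and flags are the ones visible in the rpc_cmd calls above.

# Sketch only -- the provisioning the test drives through rpc_cmd, assuming
# scripts/rpc.py from the SPDK tree and a target listening on the default RPC socket.
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, same -o -u 8192 options the test passes
$RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MB malloc bdev, 512-byte block size
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2/Malloc1 are set up the same way, then bdevperf is started with
# -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f as shown above.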
00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:01.573 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.509 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:02.509 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:23:02.509 12:24:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:02.509 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.509 12:24:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.767 NVMe0n1 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.767 1 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.767 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.767 request: 00:23:02.767 { 00:23:02.767 "name": "NVMe0", 00:23:02.767 "trtype": "tcp", 00:23:02.767 "traddr": "10.0.0.2", 00:23:02.767 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:02.767 "hostaddr": "10.0.0.2", 00:23:02.767 "hostsvcid": "60000", 00:23:02.767 "adrfam": "ipv4", 00:23:02.767 "trsvcid": "4420", 00:23:02.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.768 "method": 
"bdev_nvme_attach_controller", 00:23:02.768 "req_id": 1 00:23:02.768 } 00:23:02.768 Got JSON-RPC error response 00:23:02.768 response: 00:23:02.768 { 00:23:02.768 "code": -114, 00:23:02.768 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:02.768 } 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.768 request: 00:23:02.768 { 00:23:02.768 "name": "NVMe0", 00:23:02.768 "trtype": "tcp", 00:23:02.768 "traddr": "10.0.0.2", 00:23:02.768 "hostaddr": "10.0.0.2", 00:23:02.768 "hostsvcid": "60000", 00:23:02.768 "adrfam": "ipv4", 00:23:02.768 "trsvcid": "4420", 00:23:02.768 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:02.768 "method": "bdev_nvme_attach_controller", 00:23:02.768 "req_id": 1 00:23:02.768 } 00:23:02.768 Got JSON-RPC error response 00:23:02.768 response: 00:23:02.768 { 00:23:02.768 "code": -114, 00:23:02.768 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:02.768 } 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.768 request: 00:23:02.768 { 00:23:02.768 "name": "NVMe0", 00:23:02.768 "trtype": "tcp", 00:23:02.768 "traddr": "10.0.0.2", 00:23:02.768 "hostaddr": "10.0.0.2", 00:23:02.768 "hostsvcid": "60000", 00:23:02.768 "adrfam": "ipv4", 00:23:02.768 "trsvcid": "4420", 00:23:02.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.768 "multipath": "disable", 00:23:02.768 "method": "bdev_nvme_attach_controller", 00:23:02.768 "req_id": 1 00:23:02.768 } 00:23:02.768 Got JSON-RPC error response 00:23:02.768 response: 00:23:02.768 { 00:23:02.768 "code": -114, 00:23:02.768 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:02.768 } 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.768 request: 00:23:02.768 { 00:23:02.768 "name": "NVMe0", 00:23:02.768 "trtype": "tcp", 00:23:02.768 "traddr": "10.0.0.2", 00:23:02.768 "hostaddr": "10.0.0.2", 00:23:02.768 "hostsvcid": "60000", 00:23:02.768 "adrfam": "ipv4", 00:23:02.768 "trsvcid": "4420", 00:23:02.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.768 "multipath": "failover", 00:23:02.768 "method": "bdev_nvme_attach_controller", 00:23:02.768 "req_id": 1 00:23:02.768 } 00:23:02.768 Got JSON-RPC error response 00:23:02.768 response: 00:23:02.768 { 00:23:02.768 "code": -114, 00:23:02.768 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:02.768 } 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.768 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.049 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.049 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:03.049 12:24:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:04.442 0 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2204587 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 2204587 ']' 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 2204587 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2204587 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2204587' 00:23:04.442 killing process with pid 2204587 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 2204587 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 2204587 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:04.442 12:24:32 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # sort -u 00:23:04.442 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # cat 00:23:04.442 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:04.442 [2024-05-15 12:24:30.114230] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:23:04.442 [2024-05-15 12:24:30.114288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2204587 ] 00:23:04.443 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.443 [2024-05-15 12:24:30.184069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.443 [2024-05-15 12:24:30.254736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.443 [2024-05-15 12:24:31.492586] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name dba851de-9b19-44b4-9825-112d9654a6dd already exists 00:23:04.443 [2024-05-15 12:24:31.492617] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:dba851de-9b19-44b4-9825-112d9654a6dd alias for bdev NVMe1n1 00:23:04.443 [2024-05-15 12:24:31.492629] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:04.443 Running I/O for 1 seconds... 
00:23:04.443 00:23:04.443 Latency(us) 00:23:04.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.443 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:04.443 NVMe0n1 : 1.01 23898.52 93.35 0.00 0.00 5339.37 4220.52 22858.96 00:23:04.443 =================================================================================================================== 00:23:04.443 Total : 23898.52 93.35 0.00 0.00 5339.37 4220.52 22858.96 00:23:04.443 Received shutdown signal, test time was about 1.000000 seconds 00:23:04.443 00:23:04.443 Latency(us) 00:23:04.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.443 =================================================================================================================== 00:23:04.443 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.443 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:04.443 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1615 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:04.443 12:24:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:23:04.443 12:24:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:04.443 12:24:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:04.443 12:24:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:04.443 12:24:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.443 12:24:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:04.443 12:24:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.443 12:24:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.443 rmmod nvme_tcp 00:23:04.702 rmmod nvme_fabrics 00:23:04.702 rmmod nvme_keyring 00:23:04.702 12:24:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2204303 ']' 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2204303 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 2204303 ']' 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 2204303 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2204303 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2204303' 00:23:04.702 killing process with pid 2204303 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 2204303 00:23:04.702 [2024-05-15 
12:24:33.060051] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:04.702 12:24:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 2204303 00:23:04.961 12:24:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:04.961 12:24:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:04.961 12:24:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:04.961 12:24:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.961 12:24:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.961 12:24:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.961 12:24:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.961 12:24:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.864 12:24:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.864 00:23:06.864 real 0m12.779s 00:23:06.864 user 0m16.767s 00:23:06.864 sys 0m5.770s 00:23:06.864 12:24:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:06.864 12:24:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.864 ************************************ 00:23:06.864 END TEST nvmf_multicontroller 00:23:06.864 ************************************ 00:23:07.123 12:24:35 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:07.123 12:24:35 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:07.123 12:24:35 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:07.123 12:24:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:07.123 ************************************ 00:23:07.123 START TEST nvmf_aer 00:23:07.123 ************************************ 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:07.123 * Looking for test storage... 
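Before the aer output continues, a recap of what the multicontroller checks above were asserting: bdev_nvme_attach_controller may reuse an existing controller name (NVMe0) only for an additional path to the same subsystem. Pointing the name at a different subsystem, or re-adding a path under a different hostnqn or with multipath disabled, is rejected with JSON-RPC error -114, while a second listener of the same subsystem is accepted. A minimal sketch against the bdevperf RPC socket seen in the log, assuming SPDK's stock scripts/rpc.py:

RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"

# First path -- creates controller NVMe0 (bdev NVMe0n1); -i/-c pin the host-side
# address and service id (hostaddr/hostsvcid in the JSON requests above)
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# Same name, different subsystem (cnode2): rejected with code -114
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 || echo "rejected as expected"

# Second listener of the same subsystem: accepted as another path for NVMe0
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1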
00:23:07.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:07.123 12:24:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:15.236 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:23:15.236 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:15.236 Found net devices under 0000:af:00.0: cvl_0_0 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:15.236 Found net devices under 0000:af:00.1: cvl_0_1 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.236 
12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.236 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:23:15.236 00:23:15.236 --- 10.0.0.2 ping statistics --- 00:23:15.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.237 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:23:15.237 00:23:15.237 --- 10.0.0.1 ping statistics --- 00:23:15.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.237 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2208825 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2208825 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # '[' -z 2208825 ']' 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:15.237 12:24:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.237 [2024-05-15 12:24:42.806299] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:23:15.237 [2024-05-15 12:24:42.806347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.237 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.237 [2024-05-15 12:24:42.878268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:15.237 [2024-05-15 12:24:42.953665] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.237 [2024-05-15 12:24:42.953703] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:15.237 [2024-05-15 12:24:42.953713] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.237 [2024-05-15 12:24:42.953721] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.237 [2024-05-15 12:24:42.953745] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.237 [2024-05-15 12:24:42.953792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.237 [2024-05-15 12:24:42.953890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.237 [2024-05-15 12:24:42.953974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.237 [2024-05-15 12:24:42.953976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@861 -- # return 0 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.237 [2024-05-15 12:24:43.658115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.237 Malloc0 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.237 [2024-05-15 12:24:43.712740] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:15.237 [2024-05-15 12:24:43.713020] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.237 [ 00:23:15.237 { 00:23:15.237 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:15.237 "subtype": "Discovery", 00:23:15.237 "listen_addresses": [], 00:23:15.237 "allow_any_host": true, 00:23:15.237 "hosts": [] 00:23:15.237 }, 00:23:15.237 { 00:23:15.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.237 "subtype": "NVMe", 00:23:15.237 "listen_addresses": [ 00:23:15.237 { 00:23:15.237 "trtype": "TCP", 00:23:15.237 "adrfam": "IPv4", 00:23:15.237 "traddr": "10.0.0.2", 00:23:15.237 "trsvcid": "4420" 00:23:15.237 } 00:23:15.237 ], 00:23:15.237 "allow_any_host": true, 00:23:15.237 "hosts": [], 00:23:15.237 "serial_number": "SPDK00000000000001", 00:23:15.237 "model_number": "SPDK bdev Controller", 00:23:15.237 "max_namespaces": 2, 00:23:15.237 "min_cntlid": 1, 00:23:15.237 "max_cntlid": 65519, 00:23:15.237 "namespaces": [ 00:23:15.237 { 00:23:15.237 "nsid": 1, 00:23:15.237 "bdev_name": "Malloc0", 00:23:15.237 "name": "Malloc0", 00:23:15.237 "nguid": "3A4923A3EDAF494DBDB27A50C632749B", 00:23:15.237 "uuid": "3a4923a3-edaf-494d-bdb2-7a50c632749b" 00:23:15.237 } 00:23:15.237 ] 00:23:15.237 } 00:23:15.237 ] 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2208967 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # local i=0 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 0 -lt 200 ']' 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=1 00:23:15.237 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:23:15.495 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.495 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:15.495 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 1 -lt 200 ']' 00:23:15.495 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=2 00:23:15.495 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:23:15.495 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:15.495 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 2 -lt 200 ']' 00:23:15.495 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=3 00:23:15.495 12:24:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1273 -- # return 0 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.756 Malloc1 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.756 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.756 [ 00:23:15.756 { 00:23:15.756 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:15.756 "subtype": "Discovery", 00:23:15.756 "listen_addresses": [], 00:23:15.756 "allow_any_host": true, 00:23:15.756 "hosts": [] 00:23:15.756 }, 00:23:15.756 { 00:23:15.756 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.756 "subtype": "NVMe", 00:23:15.757 "listen_addresses": [ 00:23:15.757 { 00:23:15.757 "trtype": "TCP", 00:23:15.757 "adrfam": "IPv4", 00:23:15.757 "traddr": "10.0.0.2", 00:23:15.757 "trsvcid": "4420" 00:23:15.757 } 00:23:15.757 ], 00:23:15.757 "allow_any_host": true, 00:23:15.757 Asynchronous Event Request test 00:23:15.757 Attaching to 10.0.0.2 00:23:15.757 Attached to 10.0.0.2 00:23:15.757 Registering asynchronous event callbacks... 00:23:15.757 Starting namespace attribute notice tests for all controllers... 00:23:15.757 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:15.757 aer_cb - Changed Namespace 00:23:15.757 Cleaning up... 
00:23:15.757 "hosts": [], 00:23:15.757 "serial_number": "SPDK00000000000001", 00:23:15.757 "model_number": "SPDK bdev Controller", 00:23:15.757 "max_namespaces": 2, 00:23:15.757 "min_cntlid": 1, 00:23:15.757 "max_cntlid": 65519, 00:23:15.757 "namespaces": [ 00:23:15.757 { 00:23:15.757 "nsid": 1, 00:23:15.757 "bdev_name": "Malloc0", 00:23:15.757 "name": "Malloc0", 00:23:15.757 "nguid": "3A4923A3EDAF494DBDB27A50C632749B", 00:23:15.757 "uuid": "3a4923a3-edaf-494d-bdb2-7a50c632749b" 00:23:15.757 }, 00:23:15.757 { 00:23:15.757 "nsid": 2, 00:23:15.757 "bdev_name": "Malloc1", 00:23:15.757 "name": "Malloc1", 00:23:15.757 "nguid": "EE78326A43C84DDEA5A9FA0A740B6517", 00:23:15.757 "uuid": "ee78326a-43c8-4dde-a5a9-fa0a740b6517" 00:23:15.757 } 00:23:15.757 ] 00:23:15.757 } 00:23:15.757 ] 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2208967 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.757 rmmod nvme_tcp 00:23:15.757 rmmod nvme_fabrics 00:23:15.757 rmmod nvme_keyring 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2208825 ']' 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2208825 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' -z 2208825 ']' 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # kill -0 2208825 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # uname 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:15.757 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2208825 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2208825' 00:23:16.019 killing process with pid 2208825 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # kill 2208825 00:23:16.019 [2024-05-15 12:24:44.333720] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@971 -- # wait 2208825 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.019 12:24:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.549 12:24:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:18.549 00:23:18.549 real 0m11.132s 00:23:18.549 user 0m8.196s 00:23:18.549 sys 0m5.981s 00:23:18.549 12:24:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:18.549 12:24:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.549 ************************************ 00:23:18.549 END TEST nvmf_aer 00:23:18.549 ************************************ 00:23:18.549 12:24:46 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:18.549 12:24:46 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:18.549 12:24:46 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:18.549 12:24:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.549 ************************************ 00:23:18.549 START TEST nvmf_async_init 00:23:18.549 ************************************ 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:18.549 * Looking for test storage... 
00:23:18.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a2f066d192544a50b5da1d738722b15f 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:18.549 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:18.550 12:24:46 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.550 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:18.550 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:18.550 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:18.550 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.550 12:24:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.550 12:24:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.550 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:18.550 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:18.550 12:24:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:18.550 12:24:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:25.106 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:25.106 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:25.106 Found net devices under 0000:af:00.0: cvl_0_0 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
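The device discovery traced above (nvmf/common.sh@382 to @401) keys off sysfs: for every matching PCI function it simply globs /sys/bus/pci/devices/$pci/net/ to find the kernel net device. A condensed, hedged equivalent is sketched below; the 8086:159b (Intel E810) ID is taken from this run, the lspci invocation is a substitute for the script's prebuilt pci_bus_cache, and the cvl_0_* names seen in the log are assigned by earlier lab setup, not by this loop.

  # List every Intel E810 function (vendor 0x8086, device 0x159b) with a full
  # domain:bus:dev.func address, then print the net device(s) sysfs knows about.
  for bdf in $(lspci -D -n -d 8086:159b | awk '{print $1}'); do
      for dev in "/sys/bus/pci/devices/$bdf/net/"*; do
          [ -e "$dev" ] && echo "Found net device under $bdf: $(basename "$dev")"
      done
  done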
00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:25.106 Found net devices under 0000:af:00.1: cvl_0_1 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.106 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:23:25.364 00:23:25.364 --- 10.0.0.2 ping statistics --- 00:23:25.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.364 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:23:25.364 00:23:25.364 --- 10.0.0.1 ping statistics --- 00:23:25.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.364 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:25.364 12:24:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.365 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2212805 00:23:25.365 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:25.365 12:24:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2212805 00:23:25.365 12:24:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # '[' -z 2212805 ']' 00:23:25.365 12:24:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.365 12:24:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:25.365 12:24:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.365 12:24:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:25.365 12:24:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:25.365 [2024-05-15 12:24:53.755575] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
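Condensed from the nvmf_tcp_init trace above (nvmf/common.sh@229 to @268), the TCP phy topology is one NIC port moved into a private namespace for the target and the peer port left in the root namespace for the initiator, which forces 10.0.0.1 to 10.0.0.2 traffic out over the two physical E810 ports rather than the kernel loopback path. The commands below are the ones from this run, grouped by side and prefixed with sudo on the assumption of a non-root shell; the trace itself issues them directly because the CI shell already has the needed privileges.

  # Target side: give cvl_0_0 its own namespace and address it as 10.0.0.2/24.
  sudo ip netns add cvl_0_0_ns_spdk
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Initiator side: cvl_0_1 stays in the root namespace as 10.0.0.1/24, the
  # NVMe/TCP port 4420 is allowed through the firewall, then reachability is
  # verified in both directions, matching the ping output shown above.
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1
  sudo ip link set cvl_0_1 up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1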
00:23:25.365 [2024-05-15 12:24:53.755620] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.365 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.365 [2024-05-15 12:24:53.827068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.622 [2024-05-15 12:24:53.905005] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.622 [2024-05-15 12:24:53.905042] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.622 [2024-05-15 12:24:53.905051] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.622 [2024-05-15 12:24:53.905060] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.623 [2024-05-15 12:24:53.905068] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.623 [2024-05-15 12:24:53.905090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@861 -- # return 0 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.187 [2024-05-15 12:24:54.600198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.187 null0 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a2f066d192544a50b5da1d738722b15f 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.187 [2024-05-15 12:24:54.640242] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:26.187 [2024-05-15 12:24:54.640473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.187 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.445 nvme0n1 00:23:26.445 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.445 12:24:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:26.445 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.445 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.445 [ 00:23:26.445 { 00:23:26.445 "name": "nvme0n1", 00:23:26.445 "aliases": [ 00:23:26.445 "a2f066d1-9254-4a50-b5da-1d738722b15f" 00:23:26.445 ], 00:23:26.445 "product_name": "NVMe disk", 00:23:26.445 "block_size": 512, 00:23:26.445 "num_blocks": 2097152, 00:23:26.445 "uuid": "a2f066d1-9254-4a50-b5da-1d738722b15f", 00:23:26.445 "assigned_rate_limits": { 00:23:26.445 "rw_ios_per_sec": 0, 00:23:26.445 "rw_mbytes_per_sec": 0, 00:23:26.445 "r_mbytes_per_sec": 0, 00:23:26.445 "w_mbytes_per_sec": 0 00:23:26.445 }, 00:23:26.445 "claimed": false, 00:23:26.445 "zoned": false, 00:23:26.445 "supported_io_types": { 00:23:26.445 "read": true, 00:23:26.445 "write": true, 00:23:26.445 "unmap": false, 00:23:26.445 "write_zeroes": true, 00:23:26.445 "flush": true, 00:23:26.445 "reset": true, 00:23:26.445 "compare": true, 00:23:26.445 "compare_and_write": true, 00:23:26.445 "abort": true, 00:23:26.445 "nvme_admin": true, 00:23:26.445 "nvme_io": true 00:23:26.445 }, 00:23:26.445 "memory_domains": [ 00:23:26.445 { 00:23:26.445 "dma_device_id": "system", 00:23:26.445 "dma_device_type": 1 00:23:26.445 } 00:23:26.445 ], 00:23:26.445 "driver_specific": { 00:23:26.445 "nvme": [ 00:23:26.445 { 00:23:26.445 "trid": { 00:23:26.445 "trtype": "TCP", 00:23:26.445 "adrfam": "IPv4", 00:23:26.445 "traddr": "10.0.0.2", 00:23:26.445 "trsvcid": "4420", 00:23:26.445 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:26.445 }, 
00:23:26.445 "ctrlr_data": { 00:23:26.445 "cntlid": 1, 00:23:26.445 "vendor_id": "0x8086", 00:23:26.445 "model_number": "SPDK bdev Controller", 00:23:26.445 "serial_number": "00000000000000000000", 00:23:26.445 "firmware_revision": "24.05", 00:23:26.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:26.445 "oacs": { 00:23:26.445 "security": 0, 00:23:26.445 "format": 0, 00:23:26.445 "firmware": 0, 00:23:26.445 "ns_manage": 0 00:23:26.445 }, 00:23:26.445 "multi_ctrlr": true, 00:23:26.445 "ana_reporting": false 00:23:26.445 }, 00:23:26.445 "vs": { 00:23:26.445 "nvme_version": "1.3" 00:23:26.445 }, 00:23:26.445 "ns_data": { 00:23:26.445 "id": 1, 00:23:26.445 "can_share": true 00:23:26.445 } 00:23:26.445 } 00:23:26.445 ], 00:23:26.445 "mp_policy": "active_passive" 00:23:26.445 } 00:23:26.445 } 00:23:26.445 ] 00:23:26.445 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.445 12:24:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:26.445 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.445 12:24:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.445 [2024-05-15 12:24:54.888921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:26.445 [2024-05-15 12:24:54.888995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c7f30 (9): Bad file descriptor 00:23:26.703 [2024-05-15 12:24:55.021300] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.703 [ 00:23:26.703 { 00:23:26.703 "name": "nvme0n1", 00:23:26.703 "aliases": [ 00:23:26.703 "a2f066d1-9254-4a50-b5da-1d738722b15f" 00:23:26.703 ], 00:23:26.703 "product_name": "NVMe disk", 00:23:26.703 "block_size": 512, 00:23:26.703 "num_blocks": 2097152, 00:23:26.703 "uuid": "a2f066d1-9254-4a50-b5da-1d738722b15f", 00:23:26.703 "assigned_rate_limits": { 00:23:26.703 "rw_ios_per_sec": 0, 00:23:26.703 "rw_mbytes_per_sec": 0, 00:23:26.703 "r_mbytes_per_sec": 0, 00:23:26.703 "w_mbytes_per_sec": 0 00:23:26.703 }, 00:23:26.703 "claimed": false, 00:23:26.703 "zoned": false, 00:23:26.703 "supported_io_types": { 00:23:26.703 "read": true, 00:23:26.703 "write": true, 00:23:26.703 "unmap": false, 00:23:26.703 "write_zeroes": true, 00:23:26.703 "flush": true, 00:23:26.703 "reset": true, 00:23:26.703 "compare": true, 00:23:26.703 "compare_and_write": true, 00:23:26.703 "abort": true, 00:23:26.703 "nvme_admin": true, 00:23:26.703 "nvme_io": true 00:23:26.703 }, 00:23:26.703 "memory_domains": [ 00:23:26.703 { 00:23:26.703 "dma_device_id": "system", 00:23:26.703 "dma_device_type": 1 00:23:26.703 } 00:23:26.703 ], 00:23:26.703 "driver_specific": { 00:23:26.703 "nvme": [ 00:23:26.703 { 00:23:26.703 "trid": { 00:23:26.703 "trtype": "TCP", 00:23:26.703 "adrfam": "IPv4", 00:23:26.703 "traddr": "10.0.0.2", 00:23:26.703 "trsvcid": "4420", 00:23:26.703 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:26.703 }, 00:23:26.703 "ctrlr_data": { 00:23:26.703 "cntlid": 2, 00:23:26.703 
"vendor_id": "0x8086", 00:23:26.703 "model_number": "SPDK bdev Controller", 00:23:26.703 "serial_number": "00000000000000000000", 00:23:26.703 "firmware_revision": "24.05", 00:23:26.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:26.703 "oacs": { 00:23:26.703 "security": 0, 00:23:26.703 "format": 0, 00:23:26.703 "firmware": 0, 00:23:26.703 "ns_manage": 0 00:23:26.703 }, 00:23:26.703 "multi_ctrlr": true, 00:23:26.703 "ana_reporting": false 00:23:26.703 }, 00:23:26.703 "vs": { 00:23:26.703 "nvme_version": "1.3" 00:23:26.703 }, 00:23:26.703 "ns_data": { 00:23:26.703 "id": 1, 00:23:26.703 "can_share": true 00:23:26.703 } 00:23:26.703 } 00:23:26.703 ], 00:23:26.703 "mp_policy": "active_passive" 00:23:26.703 } 00:23:26.703 } 00:23:26.703 ] 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.C6xqUAgqCy 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.C6xqUAgqCy 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.703 [2024-05-15 12:24:55.073497] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:26.703 [2024-05-15 12:24:55.073625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C6xqUAgqCy 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.703 [2024-05-15 12:24:55.081515] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.703 12:24:55 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C6xqUAgqCy 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.703 [2024-05-15 12:24:55.089536] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.703 [2024-05-15 12:24:55.089575] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:26.703 nvme0n1 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.703 [ 00:23:26.703 { 00:23:26.703 "name": "nvme0n1", 00:23:26.703 "aliases": [ 00:23:26.703 "a2f066d1-9254-4a50-b5da-1d738722b15f" 00:23:26.703 ], 00:23:26.703 "product_name": "NVMe disk", 00:23:26.703 "block_size": 512, 00:23:26.703 "num_blocks": 2097152, 00:23:26.703 "uuid": "a2f066d1-9254-4a50-b5da-1d738722b15f", 00:23:26.703 "assigned_rate_limits": { 00:23:26.703 "rw_ios_per_sec": 0, 00:23:26.703 "rw_mbytes_per_sec": 0, 00:23:26.703 "r_mbytes_per_sec": 0, 00:23:26.703 "w_mbytes_per_sec": 0 00:23:26.703 }, 00:23:26.703 "claimed": false, 00:23:26.703 "zoned": false, 00:23:26.703 "supported_io_types": { 00:23:26.703 "read": true, 00:23:26.703 "write": true, 00:23:26.703 "unmap": false, 00:23:26.703 "write_zeroes": true, 00:23:26.703 "flush": true, 00:23:26.703 "reset": true, 00:23:26.703 "compare": true, 00:23:26.703 "compare_and_write": true, 00:23:26.703 "abort": true, 00:23:26.703 "nvme_admin": true, 00:23:26.703 "nvme_io": true 00:23:26.703 }, 00:23:26.703 "memory_domains": [ 00:23:26.703 { 00:23:26.703 "dma_device_id": "system", 00:23:26.703 "dma_device_type": 1 00:23:26.703 } 00:23:26.703 ], 00:23:26.703 "driver_specific": { 00:23:26.703 "nvme": [ 00:23:26.703 { 00:23:26.703 "trid": { 00:23:26.703 "trtype": "TCP", 00:23:26.703 "adrfam": "IPv4", 00:23:26.703 "traddr": "10.0.0.2", 00:23:26.703 "trsvcid": "4421", 00:23:26.703 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:26.703 }, 00:23:26.703 "ctrlr_data": { 00:23:26.703 "cntlid": 3, 00:23:26.703 "vendor_id": "0x8086", 00:23:26.703 "model_number": "SPDK bdev Controller", 00:23:26.703 "serial_number": "00000000000000000000", 00:23:26.703 "firmware_revision": "24.05", 00:23:26.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:26.703 "oacs": { 00:23:26.703 "security": 0, 00:23:26.703 "format": 0, 00:23:26.703 "firmware": 0, 00:23:26.703 "ns_manage": 0 00:23:26.703 }, 00:23:26.703 "multi_ctrlr": true, 00:23:26.703 "ana_reporting": false 00:23:26.703 }, 00:23:26.703 "vs": { 00:23:26.703 "nvme_version": "1.3" 00:23:26.703 }, 00:23:26.703 "ns_data": { 00:23:26.703 "id": 1, 00:23:26.703 "can_share": true 00:23:26.703 } 00:23:26.703 } 00:23:26.703 ], 00:23:26.703 "mp_policy": "active_passive" 00:23:26.703 } 00:23:26.703 } 00:23:26.703 ] 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.C6xqUAgqCy 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:26.703 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:26.704 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:26.704 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:26.704 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:26.704 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:26.704 rmmod nvme_tcp 00:23:26.704 rmmod nvme_fabrics 00:23:26.704 rmmod nvme_keyring 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2212805 ']' 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2212805 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' -z 2212805 ']' 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # kill -0 2212805 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # uname 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2212805 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2212805' 00:23:26.961 killing process with pid 2212805 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # kill 2212805 00:23:26.961 [2024-05-15 12:24:55.302041] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:26.961 [2024-05-15 12:24:55.302066] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:26.961 [2024-05-15 12:24:55.302076] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:26.961 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@971 -- # wait 2212805 00:23:27.219 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:27.219 12:24:55 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:27.219 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:27.219 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.219 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.219 12:24:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.219 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.219 12:24:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.155 12:24:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:29.155 00:23:29.155 real 0m10.837s 00:23:29.155 user 0m3.764s 00:23:29.155 sys 0m5.620s 00:23:29.155 12:24:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:29.155 12:24:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:29.155 ************************************ 00:23:29.155 END TEST nvmf_async_init 00:23:29.155 ************************************ 00:23:29.155 12:24:57 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:29.155 12:24:57 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:29.155 12:24:57 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:29.155 12:24:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:29.155 ************************************ 00:23:29.155 START TEST dma 00:23:29.155 ************************************ 00:23:29.155 12:24:57 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:29.413 * Looking for test storage... 
00:23:29.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.413 12:24:57 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.413 12:24:57 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.413 12:24:57 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.413 12:24:57 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.413 12:24:57 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.413 12:24:57 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.413 12:24:57 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.413 12:24:57 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:29.413 12:24:57 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.413 12:24:57 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.413 12:24:57 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:29.413 12:24:57 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:29.413 00:23:29.413 real 0m0.128s 00:23:29.413 user 0m0.059s 00:23:29.413 sys 0m0.079s 00:23:29.413 12:24:57 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:29.413 12:24:57 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:29.413 ************************************ 00:23:29.413 END TEST dma 00:23:29.413 ************************************ 00:23:29.413 12:24:57 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:29.413 12:24:57 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:29.413 12:24:57 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:29.413 12:24:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:29.413 ************************************ 00:23:29.413 START TEST nvmf_identify 00:23:29.413 ************************************ 00:23:29.413 12:24:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:29.671 * Looking for test storage... 
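Annotation: the dma host test above is effectively a no-op on TCP. host/dma.sh@12-13 compare the transport against rdma and exit 0 straight away, which is why the test finishes in roughly a tenth of a second. A minimal sketch of that guard (the variable name here is illustrative, not taken from dma.sh):
    # skip the DMA offload test unless the transport under test is RDMA
    if [ "$TEST_TRANSPORT" != rdma ]; then
        exit 0
    fi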
00:23:29.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.671 12:24:57 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.671 12:24:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:29.671 12:24:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.671 12:24:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.671 12:24:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.671 12:24:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.671 12:24:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.671 12:24:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.671 12:24:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.671 12:24:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.671 12:24:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.671 12:24:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:29.671 12:24:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:36.227 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:36.227 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.227 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:36.228 Found net devices under 0000:af:00.0: cvl_0_0 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:36.228 Found net devices under 0000:af:00.1: cvl_0_1 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:36.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:23:36.228 00:23:36.228 --- 10.0.0.2 ping statistics --- 00:23:36.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.228 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:36.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:23:36.228 00:23:36.228 --- 10.0.0.1 ping statistics --- 00:23:36.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.228 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2216796 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2216796 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # '[' -z 2216796 ']' 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:36.228 12:25:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.228 [2024-05-15 12:25:04.483047] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:23:36.228 [2024-05-15 12:25:04.483092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.228 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.228 [2024-05-15 12:25:04.557418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:36.228 [2024-05-15 12:25:04.628252] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
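Annotation: the network layout nvmf_tcp_init built above (nvmf/common.sh@242-268) puts one E810 port, cvl_0_0, inside the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2/24, leaves the other, cvl_0_1, in the root namespace as the initiator at 10.0.0.1/24, and opens TCP port 4420; the two pings confirm reachability in both directions. Condensed from the commands in the log:
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator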
00:23:36.228 [2024-05-15 12:25:04.628295] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.228 [2024-05-15 12:25:04.628304] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.228 [2024-05-15 12:25:04.628313] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.228 [2024-05-15 12:25:04.628320] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:36.228 [2024-05-15 12:25:04.628372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.228 [2024-05-15 12:25:04.628462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.228 [2024-05-15 12:25:04.628531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:36.228 [2024-05-15 12:25:04.628533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.796 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:36.796 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@861 -- # return 0 00:23:36.796 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.796 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.796 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.796 [2024-05-15 12:25:05.289079] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.796 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.796 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:36.796 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:36.796 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.056 Malloc0 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 
-- # xtrace_disable 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.056 [2024-05-15 12:25:05.387549] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:37.056 [2024-05-15 12:25:05.387820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.056 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.056 [ 00:23:37.056 { 00:23:37.056 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:37.056 "subtype": "Discovery", 00:23:37.056 "listen_addresses": [ 00:23:37.056 { 00:23:37.056 "trtype": "TCP", 00:23:37.056 "adrfam": "IPv4", 00:23:37.056 "traddr": "10.0.0.2", 00:23:37.056 "trsvcid": "4420" 00:23:37.056 } 00:23:37.056 ], 00:23:37.056 "allow_any_host": true, 00:23:37.056 "hosts": [] 00:23:37.056 }, 00:23:37.056 { 00:23:37.056 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.056 "subtype": "NVMe", 00:23:37.056 "listen_addresses": [ 00:23:37.056 { 00:23:37.056 "trtype": "TCP", 00:23:37.056 "adrfam": "IPv4", 00:23:37.056 "traddr": "10.0.0.2", 00:23:37.056 "trsvcid": "4420" 00:23:37.056 } 00:23:37.056 ], 00:23:37.056 "allow_any_host": true, 00:23:37.056 "hosts": [], 00:23:37.056 "serial_number": "SPDK00000000000001", 00:23:37.056 "model_number": "SPDK bdev Controller", 00:23:37.056 "max_namespaces": 32, 00:23:37.056 "min_cntlid": 1, 00:23:37.056 "max_cntlid": 65519, 00:23:37.056 "namespaces": [ 00:23:37.056 { 00:23:37.057 "nsid": 1, 00:23:37.057 "bdev_name": "Malloc0", 00:23:37.057 "name": "Malloc0", 00:23:37.057 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:37.057 "eui64": "ABCDEF0123456789", 00:23:37.057 "uuid": "e3109561-bcb8-4543-adab-2c63775d7b72" 00:23:37.057 } 00:23:37.057 ] 00:23:37.057 } 00:23:37.057 ] 00:23:37.057 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.057 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:37.057 [2024-05-15 12:25:05.446398] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
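Annotation: before launching spdk_nvme_identify, the script provisions the target over JSON-RPC (rpc_cmd forwards its arguments to scripts/rpk.py's rpc.py on the target's RPC socket). The same sequence, reproduced from the logged host/identify.sh@24-37 calls, would look roughly like:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems        # prints the JSON dump shown above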
00:23:37.057 [2024-05-15 12:25:05.446445] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216857 ] 00:23:37.057 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.057 [2024-05-15 12:25:05.476552] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:37.057 [2024-05-15 12:25:05.476602] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:37.057 [2024-05-15 12:25:05.476608] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:37.057 [2024-05-15 12:25:05.476620] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:37.057 [2024-05-15 12:25:05.476629] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:37.057 [2024-05-15 12:25:05.477128] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:37.057 [2024-05-15 12:25:05.477157] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x180fca0 0 00:23:37.057 [2024-05-15 12:25:05.491201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:37.057 [2024-05-15 12:25:05.491225] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:37.057 [2024-05-15 12:25:05.491231] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:37.057 [2024-05-15 12:25:05.491236] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:37.057 [2024-05-15 12:25:05.491276] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.491283] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.491288] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x180fca0) 00:23:37.057 [2024-05-15 12:25:05.491304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:37.057 [2024-05-15 12:25:05.491321] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879980, cid 0, qid 0 00:23:37.057 [2024-05-15 12:25:05.499201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.057 [2024-05-15 12:25:05.499210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.057 [2024-05-15 12:25:05.499215] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.499220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879980) on tqpair=0x180fca0 00:23:37.057 [2024-05-15 12:25:05.499235] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:37.057 [2024-05-15 12:25:05.499243] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:37.057 [2024-05-15 12:25:05.499250] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:37.057 [2024-05-15 12:25:05.499262] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.499267] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.499272] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x180fca0) 00:23:37.057 [2024-05-15 12:25:05.499280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.057 [2024-05-15 12:25:05.499293] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879980, cid 0, qid 0 00:23:37.057 [2024-05-15 12:25:05.499676] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.057 [2024-05-15 12:25:05.499683] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.057 [2024-05-15 12:25:05.499687] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.499695] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879980) on tqpair=0x180fca0 00:23:37.057 [2024-05-15 12:25:05.499702] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:37.057 [2024-05-15 12:25:05.499711] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:37.057 [2024-05-15 12:25:05.499719] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.499723] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.499728] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x180fca0) 00:23:37.057 [2024-05-15 12:25:05.499736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.057 [2024-05-15 12:25:05.499747] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879980, cid 0, qid 0 00:23:37.057 [2024-05-15 12:25:05.499877] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.057 [2024-05-15 12:25:05.499885] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.057 [2024-05-15 12:25:05.499890] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.499895] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879980) on tqpair=0x180fca0 00:23:37.057 [2024-05-15 12:25:05.499902] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:37.057 [2024-05-15 12:25:05.499913] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:37.057 [2024-05-15 12:25:05.499921] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.499925] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.499930] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x180fca0) 00:23:37.057 [2024-05-15 12:25:05.499937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.057 [2024-05-15 12:25:05.499950] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879980, cid 0, qid 0 00:23:37.057 [2024-05-15 12:25:05.500234] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.057 [2024-05-15 
12:25:05.500241] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.057 [2024-05-15 12:25:05.500246] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.500250] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879980) on tqpair=0x180fca0 00:23:37.057 [2024-05-15 12:25:05.500258] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:37.057 [2024-05-15 12:25:05.500269] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.500274] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.500278] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x180fca0) 00:23:37.057 [2024-05-15 12:25:05.500285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.057 [2024-05-15 12:25:05.500297] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879980, cid 0, qid 0 00:23:37.057 [2024-05-15 12:25:05.500579] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.057 [2024-05-15 12:25:05.500586] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.057 [2024-05-15 12:25:05.500590] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.500595] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879980) on tqpair=0x180fca0 00:23:37.057 [2024-05-15 12:25:05.500602] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:37.057 [2024-05-15 12:25:05.500611] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:37.057 [2024-05-15 12:25:05.500620] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:37.057 [2024-05-15 12:25:05.500727] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:37.057 [2024-05-15 12:25:05.500734] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:37.057 [2024-05-15 12:25:05.500743] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.500748] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.500752] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x180fca0) 00:23:37.057 [2024-05-15 12:25:05.500759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.057 [2024-05-15 12:25:05.500772] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879980, cid 0, qid 0 00:23:37.057 [2024-05-15 12:25:05.500897] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.057 [2024-05-15 12:25:05.500905] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.057 [2024-05-15 12:25:05.500910] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.500914] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879980) on tqpair=0x180fca0 00:23:37.057 [2024-05-15 12:25:05.500921] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:37.057 [2024-05-15 12:25:05.500932] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.500937] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.500942] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x180fca0) 00:23:37.057 [2024-05-15 12:25:05.500949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.057 [2024-05-15 12:25:05.500961] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879980, cid 0, qid 0 00:23:37.057 [2024-05-15 12:25:05.501084] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.057 [2024-05-15 12:25:05.501091] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.057 [2024-05-15 12:25:05.501096] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.057 [2024-05-15 12:25:05.501101] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879980) on tqpair=0x180fca0 00:23:37.057 [2024-05-15 12:25:05.501107] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:37.057 [2024-05-15 12:25:05.501114] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:37.057 [2024-05-15 12:25:05.501123] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:37.058 [2024-05-15 12:25:05.501139] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:37.058 [2024-05-15 12:25:05.501149] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.501154] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x180fca0) 00:23:37.058 [2024-05-15 12:25:05.501162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.058 [2024-05-15 12:25:05.501175] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879980, cid 0, qid 0 00:23:37.058 [2024-05-15 12:25:05.501395] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.058 [2024-05-15 12:25:05.501406] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.058 [2024-05-15 12:25:05.501411] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.501416] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x180fca0): datao=0, datal=4096, cccid=0 00:23:37.058 [2024-05-15 12:25:05.501422] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1879980) on tqpair(0x180fca0): expected_datao=0, payload_size=4096 00:23:37.058 [2024-05-15 12:25:05.501428] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.501436] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.501442] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542365] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.058 [2024-05-15 12:25:05.542381] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.058 [2024-05-15 12:25:05.542386] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542392] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879980) on tqpair=0x180fca0 00:23:37.058 [2024-05-15 12:25:05.542404] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:37.058 [2024-05-15 12:25:05.542411] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:37.058 [2024-05-15 12:25:05.542418] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:37.058 [2024-05-15 12:25:05.542427] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:37.058 [2024-05-15 12:25:05.542433] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:37.058 [2024-05-15 12:25:05.542441] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:37.058 [2024-05-15 12:25:05.542456] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:37.058 [2024-05-15 12:25:05.542467] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542472] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542477] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x180fca0) 00:23:37.058 [2024-05-15 12:25:05.542485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:37.058 [2024-05-15 12:25:05.542500] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879980, cid 0, qid 0 00:23:37.058 [2024-05-15 12:25:05.542622] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.058 [2024-05-15 12:25:05.542630] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.058 [2024-05-15 12:25:05.542634] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542639] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879980) on tqpair=0x180fca0 00:23:37.058 [2024-05-15 12:25:05.542652] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542657] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542661] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x180fca0) 00:23:37.058 [2024-05-15 12:25:05.542668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:37.058 [2024-05-15 12:25:05.542675] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542680] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542687] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x180fca0) 00:23:37.058 [2024-05-15 12:25:05.542693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.058 [2024-05-15 12:25:05.542700] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542705] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542709] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x180fca0) 00:23:37.058 [2024-05-15 12:25:05.542716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.058 [2024-05-15 12:25:05.542722] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542727] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542732] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.058 [2024-05-15 12:25:05.542738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.058 [2024-05-15 12:25:05.542744] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:37.058 [2024-05-15 12:25:05.542754] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:37.058 [2024-05-15 12:25:05.542761] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542766] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x180fca0) 00:23:37.058 [2024-05-15 12:25:05.542773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.058 [2024-05-15 12:25:05.542787] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879980, cid 0, qid 0 00:23:37.058 [2024-05-15 12:25:05.542793] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879ae0, cid 1, qid 0 00:23:37.058 [2024-05-15 12:25:05.542799] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879c40, cid 2, qid 0 00:23:37.058 [2024-05-15 12:25:05.542804] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.058 [2024-05-15 12:25:05.542809] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879f00, cid 4, qid 0 00:23:37.058 [2024-05-15 12:25:05.542962] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.058 [2024-05-15 12:25:05.542970] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.058 [2024-05-15 12:25:05.542975] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.542979] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879f00) on tqpair=0x180fca0 
00:23:37.058 [2024-05-15 12:25:05.542990] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:37.058 [2024-05-15 12:25:05.542996] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:37.058 [2024-05-15 12:25:05.543009] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.543014] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x180fca0) 00:23:37.058 [2024-05-15 12:25:05.543021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.058 [2024-05-15 12:25:05.543034] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879f00, cid 4, qid 0 00:23:37.058 [2024-05-15 12:25:05.543166] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.058 [2024-05-15 12:25:05.543174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.058 [2024-05-15 12:25:05.543178] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.543185] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x180fca0): datao=0, datal=4096, cccid=4 00:23:37.058 [2024-05-15 12:25:05.547200] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1879f00) on tqpair(0x180fca0): expected_datao=0, payload_size=4096 00:23:37.058 [2024-05-15 12:25:05.547208] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.547222] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.547227] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.547235] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.058 [2024-05-15 12:25:05.547242] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.058 [2024-05-15 12:25:05.547246] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.547251] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879f00) on tqpair=0x180fca0 00:23:37.058 [2024-05-15 12:25:05.547267] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:37.058 [2024-05-15 12:25:05.547294] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.547299] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x180fca0) 00:23:37.058 [2024-05-15 12:25:05.547307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.058 [2024-05-15 12:25:05.547315] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.547319] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.547324] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x180fca0) 00:23:37.058 [2024-05-15 12:25:05.547330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.058 [2024-05-15 12:25:05.547348] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879f00, cid 4, qid 0 00:23:37.058 [2024-05-15 12:25:05.547355] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187a060, cid 5, qid 0 00:23:37.058 [2024-05-15 12:25:05.547573] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.058 [2024-05-15 12:25:05.547582] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.058 [2024-05-15 12:25:05.547587] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.547592] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x180fca0): datao=0, datal=1024, cccid=4 00:23:37.058 [2024-05-15 12:25:05.547597] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1879f00) on tqpair(0x180fca0): expected_datao=0, payload_size=1024 00:23:37.058 [2024-05-15 12:25:05.547603] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.547610] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.547615] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.547621] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.058 [2024-05-15 12:25:05.547627] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.058 [2024-05-15 12:25:05.547632] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.058 [2024-05-15 12:25:05.547637] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187a060) on tqpair=0x180fca0 00:23:37.321 [2024-05-15 12:25:05.588333] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.321 [2024-05-15 12:25:05.588346] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.321 [2024-05-15 12:25:05.588351] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.321 [2024-05-15 12:25:05.588356] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879f00) on tqpair=0x180fca0 00:23:37.321 [2024-05-15 12:25:05.588370] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.321 [2024-05-15 12:25:05.588380] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x180fca0) 00:23:37.321 [2024-05-15 12:25:05.588389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.321 [2024-05-15 12:25:05.588409] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879f00, cid 4, qid 0 00:23:37.321 [2024-05-15 12:25:05.588569] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.321 [2024-05-15 12:25:05.588577] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.321 [2024-05-15 12:25:05.588581] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.321 [2024-05-15 12:25:05.588586] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x180fca0): datao=0, datal=3072, cccid=4 00:23:37.322 [2024-05-15 12:25:05.588592] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1879f00) on tqpair(0x180fca0): expected_datao=0, payload_size=3072 00:23:37.322 [2024-05-15 12:25:05.588598] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.322 [2024-05-15 12:25:05.588809] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
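Annotation: the GET LOG PAGE commands in this debug trail are reading the discovery log page. In CDW10, bits 7:0 carry the log page ID (0x70 = discovery) and bits 31:16 carry NUMDL, the zero-based dword count, so cdw10:00ff0070 is a 1024-byte read and cdw10:02ff0070 a 3072-byte read, matching the datal values in the c2h PDUs above. A quick way to decode such a value (illustrative shell arithmetic, not part of the test):
    # CDW10 of Get Log Page: LID in bits 7:0, NUMDL (0-based dwords) in bits 31:16
    cdw10=0x00ff0070
    printf 'LID=0x%02x NUMD=%d dwords (%d bytes)\n' \
        $((cdw10 & 0xff)) $(((cdw10 >> 16) + 1)) $((4 * ((cdw10 >> 16) + 1)))
    # -> LID=0x70 NUMD=256 dwords (1024 bytes)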
00:23:37.322 [2024-05-15 12:25:05.588814] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.322 [2024-05-15 12:25:05.629330] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.322 [2024-05-15 12:25:05.629346] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.322 [2024-05-15 12:25:05.629351] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.322 [2024-05-15 12:25:05.629356] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879f00) on tqpair=0x180fca0 00:23:37.322 [2024-05-15 12:25:05.629368] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.322 [2024-05-15 12:25:05.629373] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x180fca0) 00:23:37.322 [2024-05-15 12:25:05.629381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.322 [2024-05-15 12:25:05.629400] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879f00, cid 4, qid 0 00:23:37.322 [2024-05-15 12:25:05.629538] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.322 [2024-05-15 12:25:05.629546] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.322 [2024-05-15 12:25:05.629551] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.322 [2024-05-15 12:25:05.629555] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x180fca0): datao=0, datal=8, cccid=4 00:23:37.322 [2024-05-15 12:25:05.629561] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1879f00) on tqpair(0x180fca0): expected_datao=0, payload_size=8 00:23:37.322 [2024-05-15 12:25:05.629567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.322 [2024-05-15 12:25:05.629574] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.322 [2024-05-15 12:25:05.629579] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.322 [2024-05-15 12:25:05.673206] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.322 [2024-05-15 12:25:05.673220] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.322 [2024-05-15 12:25:05.673225] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.322 [2024-05-15 12:25:05.673230] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879f00) on tqpair=0x180fca0 00:23:37.322 ===================================================== 00:23:37.322 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:37.322 ===================================================== 00:23:37.322 Controller Capabilities/Features 00:23:37.322 ================================ 00:23:37.322 Vendor ID: 0000 00:23:37.322 Subsystem Vendor ID: 0000 00:23:37.322 Serial Number: .................... 00:23:37.322 Model Number: ........................................ 
00:23:37.322 Firmware Version: 24.05
00:23:37.322 Recommended Arb Burst: 0
00:23:37.322 IEEE OUI Identifier: 00 00 00
00:23:37.322 Multi-path I/O
00:23:37.322 May have multiple subsystem ports: No
00:23:37.322 May have multiple controllers: No
00:23:37.322 Associated with SR-IOV VF: No
00:23:37.322 Max Data Transfer Size: 131072
00:23:37.322 Max Number of Namespaces: 0
00:23:37.322 Max Number of I/O Queues: 1024
00:23:37.322 NVMe Specification Version (VS): 1.3
00:23:37.322 NVMe Specification Version (Identify): 1.3
00:23:37.322 Maximum Queue Entries: 128
00:23:37.322 Contiguous Queues Required: Yes
00:23:37.322 Arbitration Mechanisms Supported
00:23:37.322 Weighted Round Robin: Not Supported
00:23:37.322 Vendor Specific: Not Supported
00:23:37.322 Reset Timeout: 15000 ms
00:23:37.322 Doorbell Stride: 4 bytes
00:23:37.322 NVM Subsystem Reset: Not Supported
00:23:37.322 Command Sets Supported
00:23:37.322 NVM Command Set: Supported
00:23:37.322 Boot Partition: Not Supported
00:23:37.322 Memory Page Size Minimum: 4096 bytes
00:23:37.322 Memory Page Size Maximum: 4096 bytes
00:23:37.322 Persistent Memory Region: Not Supported
00:23:37.322 Optional Asynchronous Events Supported
00:23:37.322 Namespace Attribute Notices: Not Supported
00:23:37.322 Firmware Activation Notices: Not Supported
00:23:37.322 ANA Change Notices: Not Supported
00:23:37.322 PLE Aggregate Log Change Notices: Not Supported
00:23:37.322 LBA Status Info Alert Notices: Not Supported
00:23:37.322 EGE Aggregate Log Change Notices: Not Supported
00:23:37.322 Normal NVM Subsystem Shutdown event: Not Supported
00:23:37.322 Zone Descriptor Change Notices: Not Supported
00:23:37.322 Discovery Log Change Notices: Supported
00:23:37.322 Controller Attributes
00:23:37.322 128-bit Host Identifier: Not Supported
00:23:37.322 Non-Operational Permissive Mode: Not Supported
00:23:37.322 NVM Sets: Not Supported
00:23:37.322 Read Recovery Levels: Not Supported
00:23:37.322 Endurance Groups: Not Supported
00:23:37.322 Predictable Latency Mode: Not Supported
00:23:37.322 Traffic Based Keep ALive: Not Supported
00:23:37.322 Namespace Granularity: Not Supported
00:23:37.322 SQ Associations: Not Supported
00:23:37.322 UUID List: Not Supported
00:23:37.322 Multi-Domain Subsystem: Not Supported
00:23:37.322 Fixed Capacity Management: Not Supported
00:23:37.322 Variable Capacity Management: Not Supported
00:23:37.322 Delete Endurance Group: Not Supported
00:23:37.322 Delete NVM Set: Not Supported
00:23:37.322 Extended LBA Formats Supported: Not Supported
00:23:37.322 Flexible Data Placement Supported: Not Supported
00:23:37.322
00:23:37.322 Controller Memory Buffer Support
00:23:37.322 ================================
00:23:37.322 Supported: No
00:23:37.322
00:23:37.322 Persistent Memory Region Support
00:23:37.322 ================================
00:23:37.322 Supported: No
00:23:37.322
00:23:37.322 Admin Command Set Attributes
00:23:37.322 ============================
00:23:37.322 Security Send/Receive: Not Supported
00:23:37.322 Format NVM: Not Supported
00:23:37.322 Firmware Activate/Download: Not Supported
00:23:37.322 Namespace Management: Not Supported
00:23:37.322 Device Self-Test: Not Supported
00:23:37.322 Directives: Not Supported
00:23:37.322 NVMe-MI: Not Supported
00:23:37.322 Virtualization Management: Not Supported
00:23:37.322 Doorbell Buffer Config: Not Supported
00:23:37.322 Get LBA Status Capability: Not Supported
00:23:37.322 Command & Feature Lockdown Capability: Not Supported
00:23:37.322 Abort Command Limit: 1
00:23:37.322 Async Event Request Limit: 4
00:23:37.322 Number of Firmware Slots: N/A
00:23:37.322 Firmware Slot 1 Read-Only: N/A
00:23:37.322 Firmware Activation Without Reset: N/A
00:23:37.322 Multiple Update Detection Support: N/A
00:23:37.322 Firmware Update Granularity: No Information Provided
00:23:37.322 Per-Namespace SMART Log: No
00:23:37.322 Asymmetric Namespace Access Log Page: Not Supported
00:23:37.322 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:37.322 Command Effects Log Page: Not Supported
00:23:37.322 Get Log Page Extended Data: Supported
00:23:37.322 Telemetry Log Pages: Not Supported
00:23:37.322 Persistent Event Log Pages: Not Supported
00:23:37.322 Supported Log Pages Log Page: May Support
00:23:37.322 Commands Supported & Effects Log Page: Not Supported
00:23:37.322 Feature Identifiers & Effects Log Page:May Support
00:23:37.322 NVMe-MI Commands & Effects Log Page: May Support
00:23:37.322 Data Area 4 for Telemetry Log: Not Supported
00:23:37.322 Error Log Page Entries Supported: 128
00:23:37.322 Keep Alive: Not Supported
00:23:37.322
00:23:37.322 NVM Command Set Attributes
00:23:37.322 ==========================
00:23:37.322 Submission Queue Entry Size
00:23:37.322 Max: 1
00:23:37.322 Min: 1
00:23:37.322 Completion Queue Entry Size
00:23:37.322 Max: 1
00:23:37.322 Min: 1
00:23:37.322 Number of Namespaces: 0
00:23:37.322 Compare Command: Not Supported
00:23:37.322 Write Uncorrectable Command: Not Supported
00:23:37.322 Dataset Management Command: Not Supported
00:23:37.322 Write Zeroes Command: Not Supported
00:23:37.322 Set Features Save Field: Not Supported
00:23:37.322 Reservations: Not Supported
00:23:37.322 Timestamp: Not Supported
00:23:37.322 Copy: Not Supported
00:23:37.322 Volatile Write Cache: Not Present
00:23:37.322 Atomic Write Unit (Normal): 1
00:23:37.322 Atomic Write Unit (PFail): 1
00:23:37.322 Atomic Compare & Write Unit: 1
00:23:37.322 Fused Compare & Write: Supported
00:23:37.322 Scatter-Gather List
00:23:37.322 SGL Command Set: Supported
00:23:37.322 SGL Keyed: Supported
00:23:37.322 SGL Bit Bucket Descriptor: Not Supported
00:23:37.322 SGL Metadata Pointer: Not Supported
00:23:37.322 Oversized SGL: Not Supported
00:23:37.322 SGL Metadata Address: Not Supported
00:23:37.322 SGL Offset: Supported
00:23:37.322 Transport SGL Data Block: Not Supported
00:23:37.322 Replay Protected Memory Block: Not Supported
00:23:37.322
00:23:37.322 Firmware Slot Information
00:23:37.322 =========================
00:23:37.322 Active slot: 0
00:23:37.322
00:23:37.322
00:23:37.322 Error Log
00:23:37.322 =========
00:23:37.322
00:23:37.322 Active Namespaces
00:23:37.323 =================
00:23:37.323 Discovery Log Page
00:23:37.323 ==================
00:23:37.323 Generation Counter: 2
00:23:37.323 Number of Records: 2
00:23:37.323 Record Format: 0
00:23:37.323
00:23:37.323 Discovery Log Entry 0
00:23:37.323 ----------------------
00:23:37.323 Transport Type: 3 (TCP)
00:23:37.323 Address Family: 1 (IPv4)
00:23:37.323 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:37.323 Entry Flags:
00:23:37.323 Duplicate Returned Information: 1
00:23:37.323 Explicit Persistent Connection Support for Discovery: 1
00:23:37.323 Transport Requirements:
00:23:37.323 Secure Channel: Not Required
00:23:37.323 Port ID: 0 (0x0000)
00:23:37.323 Controller ID: 65535 (0xffff)
00:23:37.323 Admin Max SQ Size: 128
00:23:37.323 Transport Service Identifier: 4420
00:23:37.323 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:37.323 Transport Address: 10.0.0.2
00:23:37.323
Discovery Log Entry 1 00:23:37.323 ---------------------- 00:23:37.323 Transport Type: 3 (TCP) 00:23:37.323 Address Family: 1 (IPv4) 00:23:37.323 Subsystem Type: 2 (NVM Subsystem) 00:23:37.323 Entry Flags: 00:23:37.323 Duplicate Returned Information: 0 00:23:37.323 Explicit Persistent Connection Support for Discovery: 0 00:23:37.323 Transport Requirements: 00:23:37.323 Secure Channel: Not Required 00:23:37.323 Port ID: 0 (0x0000) 00:23:37.323 Controller ID: 65535 (0xffff) 00:23:37.323 Admin Max SQ Size: 128 00:23:37.323 Transport Service Identifier: 4420 00:23:37.323 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:37.323 Transport Address: 10.0.0.2 [2024-05-15 12:25:05.673318] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:37.323 [2024-05-15 12:25:05.673332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.323 [2024-05-15 12:25:05.673340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.323 [2024-05-15 12:25:05.673347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.323 [2024-05-15 12:25:05.673356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.323 [2024-05-15 12:25:05.673365] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.673370] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.673374] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.323 [2024-05-15 12:25:05.673382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.323 [2024-05-15 12:25:05.673398] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.323 [2024-05-15 12:25:05.673552] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.323 [2024-05-15 12:25:05.673560] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.323 [2024-05-15 12:25:05.673564] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.673569] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.323 [2024-05-15 12:25:05.673578] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.673583] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.673588] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.323 [2024-05-15 12:25:05.673595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.323 [2024-05-15 12:25:05.673612] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.323 [2024-05-15 12:25:05.673781] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.323 [2024-05-15 12:25:05.673788] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.323 [2024-05-15 12:25:05.673792] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.673797] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.323 [2024-05-15 12:25:05.673804] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:37.323 [2024-05-15 12:25:05.673810] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:37.323 [2024-05-15 12:25:05.673822] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.673826] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.673831] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.323 [2024-05-15 12:25:05.673838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.323 [2024-05-15 12:25:05.673850] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.323 [2024-05-15 12:25:05.673980] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.323 [2024-05-15 12:25:05.673987] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.323 [2024-05-15 12:25:05.673991] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.673996] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.323 [2024-05-15 12:25:05.674007] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.674012] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.674017] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.323 [2024-05-15 12:25:05.674024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.323 [2024-05-15 12:25:05.674035] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.323 [2024-05-15 12:25:05.674159] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.323 [2024-05-15 12:25:05.674166] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.323 [2024-05-15 12:25:05.674170] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.674175] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.323 [2024-05-15 12:25:05.674187] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.674201] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.674206] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.323 [2024-05-15 12:25:05.674213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.323 [2024-05-15 12:25:05.674226] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.323 [2024-05-15 12:25:05.674512] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.323 [2024-05-15 
12:25:05.674519] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.323 [2024-05-15 12:25:05.674523] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.674528] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.323 [2024-05-15 12:25:05.674540] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.674545] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.674549] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.323 [2024-05-15 12:25:05.674556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.323 [2024-05-15 12:25:05.674567] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.323 [2024-05-15 12:25:05.674688] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.323 [2024-05-15 12:25:05.674695] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.323 [2024-05-15 12:25:05.674700] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.674704] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.323 [2024-05-15 12:25:05.674716] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.674721] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.674725] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.323 [2024-05-15 12:25:05.674733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.323 [2024-05-15 12:25:05.674744] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.323 [2024-05-15 12:25:05.675032] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.323 [2024-05-15 12:25:05.675038] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.323 [2024-05-15 12:25:05.675043] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.675047] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.323 [2024-05-15 12:25:05.675059] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.675063] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.675068] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.323 [2024-05-15 12:25:05.675075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.323 [2024-05-15 12:25:05.675086] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.323 [2024-05-15 12:25:05.675215] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.323 [2024-05-15 12:25:05.675226] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.323 [2024-05-15 12:25:05.675231] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
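The run of FABRIC PROPERTY GET completions from here down to the "shutdown complete in 7 milliseconds" entry further below is the host polling CSTS after requesting a normal shutdown (CC.SHN) of the discovery controller. In application code that whole exchange sits behind a single detach call; a one-function illustration (assuming an SPDK ctrlr handle, not code from this test):

/*
 * Illustration only: spdk_nvme_detach() performs the controller shutdown
 * (set CC.SHN, poll CSTS until SHST reports complete) that produces the
 * repeated property GET traffic in the surrounding log, then frees the handle.
 */
#include "spdk/nvme.h"

static void
teardown_discovery_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
    if (spdk_nvme_detach(ctrlr) != 0) {
        /* Sketch: nothing further to do; detach tears the controller down even on error paths. */
    }
}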
00:23:37.323 [2024-05-15 12:25:05.675236] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.323 [2024-05-15 12:25:05.675248] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.675253] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.323 [2024-05-15 12:25:05.675257] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.323 [2024-05-15 12:25:05.675264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.323 [2024-05-15 12:25:05.675277] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.323 [2024-05-15 12:25:05.675402] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.324 [2024-05-15 12:25:05.675409] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.324 [2024-05-15 12:25:05.675414] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.675419] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.324 [2024-05-15 12:25:05.675431] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.675435] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.675440] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.324 [2024-05-15 12:25:05.675447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.324 [2024-05-15 12:25:05.675459] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.324 [2024-05-15 12:25:05.675578] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.324 [2024-05-15 12:25:05.675584] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.324 [2024-05-15 12:25:05.675589] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.675594] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.324 [2024-05-15 12:25:05.675605] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.675610] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.675614] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.324 [2024-05-15 12:25:05.675621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.324 [2024-05-15 12:25:05.675633] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.324 [2024-05-15 12:25:05.675751] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.324 [2024-05-15 12:25:05.675758] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.324 [2024-05-15 12:25:05.675763] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.675768] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.324 [2024-05-15 12:25:05.675778] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.675783] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.675788] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.324 [2024-05-15 12:25:05.675795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.324 [2024-05-15 12:25:05.675807] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.324 [2024-05-15 12:25:05.675929] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.324 [2024-05-15 12:25:05.675936] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.324 [2024-05-15 12:25:05.675943] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.675948] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.324 [2024-05-15 12:25:05.675960] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.675965] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.675969] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.324 [2024-05-15 12:25:05.675976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.324 [2024-05-15 12:25:05.675988] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.324 [2024-05-15 12:25:05.676109] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.324 [2024-05-15 12:25:05.676116] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.324 [2024-05-15 12:25:05.676121] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.676125] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.324 [2024-05-15 12:25:05.676137] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.676142] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.676146] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.324 [2024-05-15 12:25:05.676153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.324 [2024-05-15 12:25:05.676165] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.324 [2024-05-15 12:25:05.676446] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.324 [2024-05-15 12:25:05.676453] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.324 [2024-05-15 12:25:05.676457] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.676462] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.324 [2024-05-15 12:25:05.676473] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.676478] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.324 [2024-05-15 
12:25:05.676483] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.324 [2024-05-15 12:25:05.676490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.324 [2024-05-15 12:25:05.676502] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.324 [2024-05-15 12:25:05.676781] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.324 [2024-05-15 12:25:05.676788] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.324 [2024-05-15 12:25:05.676793] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.676797] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.324 [2024-05-15 12:25:05.676809] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.676814] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.676818] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.324 [2024-05-15 12:25:05.676825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.324 [2024-05-15 12:25:05.676836] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.324 [2024-05-15 12:25:05.676960] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.324 [2024-05-15 12:25:05.676967] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.324 [2024-05-15 12:25:05.676971] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.676979] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.324 [2024-05-15 12:25:05.676991] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.676996] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.677000] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.324 [2024-05-15 12:25:05.677007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.324 [2024-05-15 12:25:05.677019] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.324 [2024-05-15 12:25:05.677142] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.324 [2024-05-15 12:25:05.677149] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.324 [2024-05-15 12:25:05.677153] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.677158] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.324 [2024-05-15 12:25:05.677170] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.677175] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.677179] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x180fca0) 00:23:37.324 [2024-05-15 12:25:05.677186] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.324 [2024-05-15 12:25:05.681207] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1879da0, cid 3, qid 0 00:23:37.324 [2024-05-15 12:25:05.681433] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.324 [2024-05-15 12:25:05.681441] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.324 [2024-05-15 12:25:05.681445] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.681450] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1879da0) on tqpair=0x180fca0 00:23:37.324 [2024-05-15 12:25:05.681461] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:23:37.324 00:23:37.324 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:37.324 [2024-05-15 12:25:05.719707] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:23:37.324 [2024-05-15 12:25:05.719752] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216939 ] 00:23:37.324 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.324 [2024-05-15 12:25:05.752280] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:37.324 [2024-05-15 12:25:05.752323] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:37.324 [2024-05-15 12:25:05.752329] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:37.324 [2024-05-15 12:25:05.752341] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:37.324 [2024-05-15 12:25:05.752350] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:37.324 [2024-05-15 12:25:05.752799] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:37.324 [2024-05-15 12:25:05.752822] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x17f7ca0 0 00:23:37.324 [2024-05-15 12:25:05.767199] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:37.324 [2024-05-15 12:25:05.767223] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:37.324 [2024-05-15 12:25:05.767229] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:37.324 [2024-05-15 12:25:05.767233] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:37.324 [2024-05-15 12:25:05.767269] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.767276] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.324 [2024-05-15 12:25:05.767281] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17f7ca0) 00:23:37.324 [2024-05-15 12:25:05.767294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 
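At this point the harness launches spdk_nvme_identify a second time, now against nqn.2016-06.io.spdk:cnode1, and the entries that follow trace the standard controller bring-up: FABRIC CONNECT on the admin queue, reads of VS and CAP, CC.EN handling and the IDENTIFY commands. For orientation, a minimal sketch of driving the same bring-up through SPDK's public API; the program name, error handling and printed fields are illustrative, not taken from the test:

/*
 * Sketch only: connect to the same TCP subsystem the log targets and read
 * back a couple of identify fields. Environment setup is abbreviated.
 */
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&opts);
    opts.name = "identify_sketch";   /* hypothetical app name, not from the test */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* Same target string the harness passes to spdk_nvme_identify -r */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Drives the FABRIC CONNECT / read vs / read cap / CC.EN sequence in the log */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("CNTLID 0x%04x, MDTS %u\n", cdata->cntlid, cdata->mdts);

    spdk_nvme_detach(ctrlr);
    return 0;
}

spdk_nvme_connect() returns only once the state machine traced in the log (connect adminq, read vs, read cap, enable, identify) has reached "ready", which is why the sketch needs no explicit polling.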
00:23:37.324 [2024-05-15 12:25:05.767312] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861980, cid 0, qid 0 00:23:37.325 [2024-05-15 12:25:05.774202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.325 [2024-05-15 12:25:05.774213] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.325 [2024-05-15 12:25:05.774218] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.774223] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861980) on tqpair=0x17f7ca0 00:23:37.325 [2024-05-15 12:25:05.774235] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:37.325 [2024-05-15 12:25:05.774242] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:37.325 [2024-05-15 12:25:05.774249] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:37.325 [2024-05-15 12:25:05.774260] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.774265] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.774270] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17f7ca0) 00:23:37.325 [2024-05-15 12:25:05.774279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.325 [2024-05-15 12:25:05.774294] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861980, cid 0, qid 0 00:23:37.325 [2024-05-15 12:25:05.774529] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.325 [2024-05-15 12:25:05.774537] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.325 [2024-05-15 12:25:05.774542] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.774547] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861980) on tqpair=0x17f7ca0 00:23:37.325 [2024-05-15 12:25:05.774554] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:37.325 [2024-05-15 12:25:05.774564] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:37.325 [2024-05-15 12:25:05.774572] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.774577] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.774581] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17f7ca0) 00:23:37.325 [2024-05-15 12:25:05.774589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.325 [2024-05-15 12:25:05.774603] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861980, cid 0, qid 0 00:23:37.325 [2024-05-15 12:25:05.774775] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.325 [2024-05-15 12:25:05.774783] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.325 [2024-05-15 12:25:05.774787] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.774792] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861980) on tqpair=0x17f7ca0 00:23:37.325 [2024-05-15 12:25:05.774802] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:37.325 [2024-05-15 12:25:05.774811] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:37.325 [2024-05-15 12:25:05.774819] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.774824] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.774828] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17f7ca0) 00:23:37.325 [2024-05-15 12:25:05.774835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.325 [2024-05-15 12:25:05.774848] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861980, cid 0, qid 0 00:23:37.325 [2024-05-15 12:25:05.774974] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.325 [2024-05-15 12:25:05.774981] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.325 [2024-05-15 12:25:05.774986] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.774991] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861980) on tqpair=0x17f7ca0 00:23:37.325 [2024-05-15 12:25:05.774998] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:37.325 [2024-05-15 12:25:05.775009] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.775014] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.775019] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17f7ca0) 00:23:37.325 [2024-05-15 12:25:05.775026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.325 [2024-05-15 12:25:05.775038] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861980, cid 0, qid 0 00:23:37.325 [2024-05-15 12:25:05.775198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.325 [2024-05-15 12:25:05.775206] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.325 [2024-05-15 12:25:05.775210] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.775215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861980) on tqpair=0x17f7ca0 00:23:37.325 [2024-05-15 12:25:05.775222] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:37.325 [2024-05-15 12:25:05.775228] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:37.325 [2024-05-15 12:25:05.775237] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:37.325 [2024-05-15 12:25:05.775344] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:37.325 
[2024-05-15 12:25:05.775349] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:37.325 [2024-05-15 12:25:05.775358] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.775363] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.775367] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17f7ca0) 00:23:37.325 [2024-05-15 12:25:05.775374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.325 [2024-05-15 12:25:05.775387] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861980, cid 0, qid 0 00:23:37.325 [2024-05-15 12:25:05.775565] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.325 [2024-05-15 12:25:05.775572] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.325 [2024-05-15 12:25:05.775580] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.775584] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861980) on tqpair=0x17f7ca0 00:23:37.325 [2024-05-15 12:25:05.775591] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:37.325 [2024-05-15 12:25:05.775602] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.775607] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.775611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17f7ca0) 00:23:37.325 [2024-05-15 12:25:05.775618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.325 [2024-05-15 12:25:05.775630] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861980, cid 0, qid 0 00:23:37.325 [2024-05-15 12:25:05.775766] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.325 [2024-05-15 12:25:05.775773] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.325 [2024-05-15 12:25:05.775778] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.775782] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861980) on tqpair=0x17f7ca0 00:23:37.325 [2024-05-15 12:25:05.775789] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:37.325 [2024-05-15 12:25:05.775795] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:37.325 [2024-05-15 12:25:05.775805] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:37.325 [2024-05-15 12:25:05.775815] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:37.325 [2024-05-15 12:25:05.775825] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.775830] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x17f7ca0) 00:23:37.325 [2024-05-15 12:25:05.775837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.325 [2024-05-15 12:25:05.775850] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861980, cid 0, qid 0 00:23:37.325 [2024-05-15 12:25:05.776023] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.325 [2024-05-15 12:25:05.776031] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.325 [2024-05-15 12:25:05.776035] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.776040] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17f7ca0): datao=0, datal=4096, cccid=0 00:23:37.325 [2024-05-15 12:25:05.776046] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1861980) on tqpair(0x17f7ca0): expected_datao=0, payload_size=4096 00:23:37.325 [2024-05-15 12:25:05.776052] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.776060] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.776064] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.776140] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.325 [2024-05-15 12:25:05.776147] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.325 [2024-05-15 12:25:05.776152] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.776156] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861980) on tqpair=0x17f7ca0 00:23:37.325 [2024-05-15 12:25:05.776166] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:37.325 [2024-05-15 12:25:05.776174] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:37.325 [2024-05-15 12:25:05.776180] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:37.325 [2024-05-15 12:25:05.776185] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:37.325 [2024-05-15 12:25:05.776198] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:37.325 [2024-05-15 12:25:05.776205] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:37.325 [2024-05-15 12:25:05.776218] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:37.325 [2024-05-15 12:25:05.776228] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.325 [2024-05-15 12:25:05.776233] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776238] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17f7ca0) 00:23:37.326 [2024-05-15 12:25:05.776246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:37.326 [2024-05-15 12:25:05.776260] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1861980, cid 0, qid 0 00:23:37.326 [2024-05-15 12:25:05.776386] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.326 [2024-05-15 12:25:05.776393] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.326 [2024-05-15 12:25:05.776398] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776403] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861980) on tqpair=0x17f7ca0 00:23:37.326 [2024-05-15 12:25:05.776414] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776419] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776423] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17f7ca0) 00:23:37.326 [2024-05-15 12:25:05.776430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.326 [2024-05-15 12:25:05.776437] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776441] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776446] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x17f7ca0) 00:23:37.326 [2024-05-15 12:25:05.776452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.326 [2024-05-15 12:25:05.776459] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776463] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776468] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x17f7ca0) 00:23:37.326 [2024-05-15 12:25:05.776474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.326 [2024-05-15 12:25:05.776481] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776485] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776490] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17f7ca0) 00:23:37.326 [2024-05-15 12:25:05.776496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.326 [2024-05-15 12:25:05.776502] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:37.326 [2024-05-15 12:25:05.776512] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:37.326 [2024-05-15 12:25:05.776521] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776526] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17f7ca0) 00:23:37.326 [2024-05-15 12:25:05.776533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.326 [2024-05-15 12:25:05.776547] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861980, cid 0, qid 0 00:23:37.326 [2024-05-15 
12:25:05.776553] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861ae0, cid 1, qid 0 00:23:37.326 [2024-05-15 12:25:05.776558] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861c40, cid 2, qid 0 00:23:37.326 [2024-05-15 12:25:05.776564] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861da0, cid 3, qid 0 00:23:37.326 [2024-05-15 12:25:05.776569] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861f00, cid 4, qid 0 00:23:37.326 [2024-05-15 12:25:05.776746] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.326 [2024-05-15 12:25:05.776754] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.326 [2024-05-15 12:25:05.776758] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776763] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861f00) on tqpair=0x17f7ca0 00:23:37.326 [2024-05-15 12:25:05.776773] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:37.326 [2024-05-15 12:25:05.776780] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:37.326 [2024-05-15 12:25:05.776790] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:37.326 [2024-05-15 12:25:05.776798] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:37.326 [2024-05-15 12:25:05.776805] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776810] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776814] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17f7ca0) 00:23:37.326 [2024-05-15 12:25:05.776821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:37.326 [2024-05-15 12:25:05.776834] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861f00, cid 4, qid 0 00:23:37.326 [2024-05-15 12:25:05.776965] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.326 [2024-05-15 12:25:05.776972] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.326 [2024-05-15 12:25:05.776977] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.776981] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861f00) on tqpair=0x17f7ca0 00:23:37.326 [2024-05-15 12:25:05.777028] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:37.326 [2024-05-15 12:25:05.777039] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:37.326 [2024-05-15 12:25:05.777048] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.777053] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17f7ca0) 00:23:37.326 [2024-05-15 12:25:05.777060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.326 [2024-05-15 12:25:05.777073] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861f00, cid 4, qid 0 00:23:37.326 [2024-05-15 12:25:05.777217] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.326 [2024-05-15 12:25:05.777228] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.326 [2024-05-15 12:25:05.777233] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.777238] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17f7ca0): datao=0, datal=4096, cccid=4 00:23:37.326 [2024-05-15 12:25:05.777244] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1861f00) on tqpair(0x17f7ca0): expected_datao=0, payload_size=4096 00:23:37.326 [2024-05-15 12:25:05.777249] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.777447] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.777453] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.818433] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.326 [2024-05-15 12:25:05.818447] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.326 [2024-05-15 12:25:05.818452] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.818457] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861f00) on tqpair=0x17f7ca0 00:23:37.326 [2024-05-15 12:25:05.818473] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:37.326 [2024-05-15 12:25:05.818486] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:37.326 [2024-05-15 12:25:05.818497] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:37.326 [2024-05-15 12:25:05.818505] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.326 [2024-05-15 12:25:05.818510] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17f7ca0) 00:23:37.326 [2024-05-15 12:25:05.818518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.326 [2024-05-15 12:25:05.818533] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861f00, cid 4, qid 0 00:23:37.326 [2024-05-15 12:25:05.818679] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.326 [2024-05-15 12:25:05.818687] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.326 [2024-05-15 12:25:05.818691] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.327 [2024-05-15 12:25:05.818696] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17f7ca0): datao=0, datal=4096, cccid=4 00:23:37.327 [2024-05-15 12:25:05.818702] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1861f00) on tqpair(0x17f7ca0): expected_datao=0, payload_size=4096 00:23:37.327 [2024-05-15 12:25:05.818708] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.327 [2024-05-15 12:25:05.818938] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.327 [2024-05-15 12:25:05.818943] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.859441] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.588 [2024-05-15 12:25:05.859454] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.588 [2024-05-15 12:25:05.859459] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.859464] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861f00) on tqpair=0x17f7ca0 00:23:37.588 [2024-05-15 12:25:05.859477] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:37.588 [2024-05-15 12:25:05.859489] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:37.588 [2024-05-15 12:25:05.859498] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.859503] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17f7ca0) 00:23:37.588 [2024-05-15 12:25:05.859510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.588 [2024-05-15 12:25:05.859529] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861f00, cid 4, qid 0 00:23:37.588 [2024-05-15 12:25:05.859658] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.588 [2024-05-15 12:25:05.859665] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.588 [2024-05-15 12:25:05.859670] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.859675] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17f7ca0): datao=0, datal=4096, cccid=4 00:23:37.588 [2024-05-15 12:25:05.859681] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1861f00) on tqpair(0x17f7ca0): expected_datao=0, payload_size=4096 00:23:37.588 [2024-05-15 12:25:05.859686] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.859894] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.859898] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.900498] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.588 [2024-05-15 12:25:05.900511] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.588 [2024-05-15 12:25:05.900515] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.900520] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861f00) on tqpair=0x17f7ca0 00:23:37.588 [2024-05-15 12:25:05.900535] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:37.588 [2024-05-15 12:25:05.900546] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:37.588 [2024-05-15 12:25:05.900555] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported 
features (timeout 30000 ms) 00:23:37.588 [2024-05-15 12:25:05.900562] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:37.588 [2024-05-15 12:25:05.900569] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:37.588 [2024-05-15 12:25:05.900575] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:37.588 [2024-05-15 12:25:05.900581] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:37.588 [2024-05-15 12:25:05.900588] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:37.588 [2024-05-15 12:25:05.900605] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.900610] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17f7ca0) 00:23:37.588 [2024-05-15 12:25:05.900618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.588 [2024-05-15 12:25:05.900625] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.900630] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.900635] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17f7ca0) 00:23:37.588 [2024-05-15 12:25:05.900641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.588 [2024-05-15 12:25:05.900658] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861f00, cid 4, qid 0 00:23:37.588 [2024-05-15 12:25:05.900664] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862060, cid 5, qid 0 00:23:37.588 [2024-05-15 12:25:05.900844] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.588 [2024-05-15 12:25:05.900852] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.588 [2024-05-15 12:25:05.900859] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.900864] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861f00) on tqpair=0x17f7ca0 00:23:37.588 [2024-05-15 12:25:05.900872] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.588 [2024-05-15 12:25:05.900878] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.588 [2024-05-15 12:25:05.900883] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.900887] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1862060) on tqpair=0x17f7ca0 00:23:37.588 [2024-05-15 12:25:05.900899] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.900904] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17f7ca0) 00:23:37.588 [2024-05-15 12:25:05.900911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.588 [2024-05-15 12:25:05.900924] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862060, cid 5, qid 0 00:23:37.588 [2024-05-15 12:25:05.901052] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.588 [2024-05-15 12:25:05.901059] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.588 [2024-05-15 12:25:05.901063] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.901068] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1862060) on tqpair=0x17f7ca0 00:23:37.588 [2024-05-15 12:25:05.901080] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.901084] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17f7ca0) 00:23:37.588 [2024-05-15 12:25:05.901091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.588 [2024-05-15 12:25:05.901103] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862060, cid 5, qid 0 00:23:37.588 [2024-05-15 12:25:05.905201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.588 [2024-05-15 12:25:05.905210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.588 [2024-05-15 12:25:05.905215] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.905219] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1862060) on tqpair=0x17f7ca0 00:23:37.588 [2024-05-15 12:25:05.905232] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.905237] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17f7ca0) 00:23:37.588 [2024-05-15 12:25:05.905244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.588 [2024-05-15 12:25:05.905257] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862060, cid 5, qid 0 00:23:37.588 [2024-05-15 12:25:05.905656] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.588 [2024-05-15 12:25:05.905663] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.588 [2024-05-15 12:25:05.905667] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.905672] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1862060) on tqpair=0x17f7ca0 00:23:37.588 [2024-05-15 12:25:05.905687] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.905692] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17f7ca0) 00:23:37.588 [2024-05-15 12:25:05.905698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.588 [2024-05-15 12:25:05.905706] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.905711] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17f7ca0) 00:23:37.588 [2024-05-15 12:25:05.905717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.588 
[2024-05-15 12:25:05.905728] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.905733] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x17f7ca0) 00:23:37.588 [2024-05-15 12:25:05.905739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.588 [2024-05-15 12:25:05.905750] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.588 [2024-05-15 12:25:05.905755] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17f7ca0) 00:23:37.588 [2024-05-15 12:25:05.905761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.588 [2024-05-15 12:25:05.905774] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862060, cid 5, qid 0 00:23:37.588 [2024-05-15 12:25:05.905779] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861f00, cid 4, qid 0 00:23:37.588 [2024-05-15 12:25:05.905785] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18621c0, cid 6, qid 0 00:23:37.588 [2024-05-15 12:25:05.905790] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862320, cid 7, qid 0 00:23:37.588 [2024-05-15 12:25:05.906139] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.588 [2024-05-15 12:25:05.906145] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.588 [2024-05-15 12:25:05.906150] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906154] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17f7ca0): datao=0, datal=8192, cccid=5 00:23:37.589 [2024-05-15 12:25:05.906160] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1862060) on tqpair(0x17f7ca0): expected_datao=0, payload_size=8192 00:23:37.589 [2024-05-15 12:25:05.906166] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906593] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906599] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906605] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.589 [2024-05-15 12:25:05.906611] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.589 [2024-05-15 12:25:05.906615] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906620] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17f7ca0): datao=0, datal=512, cccid=4 00:23:37.589 [2024-05-15 12:25:05.906626] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1861f00) on tqpair(0x17f7ca0): expected_datao=0, payload_size=512 00:23:37.589 [2024-05-15 12:25:05.906631] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906638] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906642] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906648] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.589 [2024-05-15 12:25:05.906655] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.589 [2024-05-15 12:25:05.906659] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906663] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17f7ca0): datao=0, datal=512, cccid=6 00:23:37.589 [2024-05-15 12:25:05.906669] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18621c0) on tqpair(0x17f7ca0): expected_datao=0, payload_size=512 00:23:37.589 [2024-05-15 12:25:05.906675] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906681] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906686] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906694] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:37.589 [2024-05-15 12:25:05.906700] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:37.589 [2024-05-15 12:25:05.906705] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906709] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17f7ca0): datao=0, datal=4096, cccid=7 00:23:37.589 [2024-05-15 12:25:05.906715] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1862320) on tqpair(0x17f7ca0): expected_datao=0, payload_size=4096 00:23:37.589 [2024-05-15 12:25:05.906720] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906727] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906732] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906938] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.589 [2024-05-15 12:25:05.906944] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.589 [2024-05-15 12:25:05.906949] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906953] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1862060) on tqpair=0x17f7ca0 00:23:37.589 [2024-05-15 12:25:05.906967] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.589 [2024-05-15 12:25:05.906973] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.589 [2024-05-15 12:25:05.906978] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.906983] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861f00) on tqpair=0x17f7ca0 00:23:37.589 [2024-05-15 12:25:05.906993] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.589 [2024-05-15 12:25:05.906999] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.589 [2024-05-15 12:25:05.907004] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.907008] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18621c0) on tqpair=0x17f7ca0 00:23:37.589 [2024-05-15 12:25:05.907019] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.589 [2024-05-15 12:25:05.907025] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.589 [2024-05-15 12:25:05.907030] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.589 [2024-05-15 12:25:05.907034] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1862320) on tqpair=0x17f7ca0 00:23:37.589 ===================================================== 00:23:37.589 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:37.589 ===================================================== 00:23:37.589 Controller Capabilities/Features 00:23:37.589 ================================ 00:23:37.589 Vendor ID: 8086 00:23:37.589 Subsystem Vendor ID: 8086 00:23:37.589 Serial Number: SPDK00000000000001 00:23:37.589 Model Number: SPDK bdev Controller 00:23:37.589 Firmware Version: 24.05 00:23:37.589 Recommended Arb Burst: 6 00:23:37.589 IEEE OUI Identifier: e4 d2 5c 00:23:37.589 Multi-path I/O 00:23:37.589 May have multiple subsystem ports: Yes 00:23:37.589 May have multiple controllers: Yes 00:23:37.589 Associated with SR-IOV VF: No 00:23:37.589 Max Data Transfer Size: 131072 00:23:37.589 Max Number of Namespaces: 32 00:23:37.589 Max Number of I/O Queues: 127 00:23:37.589 NVMe Specification Version (VS): 1.3 00:23:37.589 NVMe Specification Version (Identify): 1.3 00:23:37.589 Maximum Queue Entries: 128 00:23:37.589 Contiguous Queues Required: Yes 00:23:37.589 Arbitration Mechanisms Supported 00:23:37.589 Weighted Round Robin: Not Supported 00:23:37.589 Vendor Specific: Not Supported 00:23:37.589 Reset Timeout: 15000 ms 00:23:37.589 Doorbell Stride: 4 bytes 00:23:37.589 NVM Subsystem Reset: Not Supported 00:23:37.589 Command Sets Supported 00:23:37.589 NVM Command Set: Supported 00:23:37.589 Boot Partition: Not Supported 00:23:37.589 Memory Page Size Minimum: 4096 bytes 00:23:37.589 Memory Page Size Maximum: 4096 bytes 00:23:37.589 Persistent Memory Region: Not Supported 00:23:37.589 Optional Asynchronous Events Supported 00:23:37.589 Namespace Attribute Notices: Supported 00:23:37.589 Firmware Activation Notices: Not Supported 00:23:37.589 ANA Change Notices: Not Supported 00:23:37.589 PLE Aggregate Log Change Notices: Not Supported 00:23:37.589 LBA Status Info Alert Notices: Not Supported 00:23:37.589 EGE Aggregate Log Change Notices: Not Supported 00:23:37.589 Normal NVM Subsystem Shutdown event: Not Supported 00:23:37.589 Zone Descriptor Change Notices: Not Supported 00:23:37.589 Discovery Log Change Notices: Not Supported 00:23:37.589 Controller Attributes 00:23:37.589 128-bit Host Identifier: Supported 00:23:37.589 Non-Operational Permissive Mode: Not Supported 00:23:37.589 NVM Sets: Not Supported 00:23:37.589 Read Recovery Levels: Not Supported 00:23:37.589 Endurance Groups: Not Supported 00:23:37.589 Predictable Latency Mode: Not Supported 00:23:37.589 Traffic Based Keep ALive: Not Supported 00:23:37.589 Namespace Granularity: Not Supported 00:23:37.589 SQ Associations: Not Supported 00:23:37.589 UUID List: Not Supported 00:23:37.589 Multi-Domain Subsystem: Not Supported 00:23:37.589 Fixed Capacity Management: Not Supported 00:23:37.589 Variable Capacity Management: Not Supported 00:23:37.589 Delete Endurance Group: Not Supported 00:23:37.589 Delete NVM Set: Not Supported 00:23:37.589 Extended LBA Formats Supported: Not Supported 00:23:37.589 Flexible Data Placement Supported: Not Supported 00:23:37.589 00:23:37.589 Controller Memory Buffer Support 00:23:37.589 ================================ 00:23:37.589 Supported: No 00:23:37.589 00:23:37.589 Persistent Memory Region Support 00:23:37.589 ================================ 00:23:37.589 Supported: No 00:23:37.589 00:23:37.589 Admin Command Set Attributes 00:23:37.589 ============================ 00:23:37.589 
Security Send/Receive: Not Supported 00:23:37.589 Format NVM: Not Supported 00:23:37.589 Firmware Activate/Download: Not Supported 00:23:37.589 Namespace Management: Not Supported 00:23:37.589 Device Self-Test: Not Supported 00:23:37.589 Directives: Not Supported 00:23:37.589 NVMe-MI: Not Supported 00:23:37.589 Virtualization Management: Not Supported 00:23:37.589 Doorbell Buffer Config: Not Supported 00:23:37.589 Get LBA Status Capability: Not Supported 00:23:37.589 Command & Feature Lockdown Capability: Not Supported 00:23:37.589 Abort Command Limit: 4 00:23:37.589 Async Event Request Limit: 4 00:23:37.589 Number of Firmware Slots: N/A 00:23:37.589 Firmware Slot 1 Read-Only: N/A 00:23:37.589 Firmware Activation Without Reset: N/A 00:23:37.589 Multiple Update Detection Support: N/A 00:23:37.589 Firmware Update Granularity: No Information Provided 00:23:37.589 Per-Namespace SMART Log: No 00:23:37.589 Asymmetric Namespace Access Log Page: Not Supported 00:23:37.589 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:37.589 Command Effects Log Page: Supported 00:23:37.589 Get Log Page Extended Data: Supported 00:23:37.589 Telemetry Log Pages: Not Supported 00:23:37.589 Persistent Event Log Pages: Not Supported 00:23:37.589 Supported Log Pages Log Page: May Support 00:23:37.589 Commands Supported & Effects Log Page: Not Supported 00:23:37.589 Feature Identifiers & Effects Log Page:May Support 00:23:37.589 NVMe-MI Commands & Effects Log Page: May Support 00:23:37.589 Data Area 4 for Telemetry Log: Not Supported 00:23:37.589 Error Log Page Entries Supported: 128 00:23:37.589 Keep Alive: Supported 00:23:37.589 Keep Alive Granularity: 10000 ms 00:23:37.589 00:23:37.589 NVM Command Set Attributes 00:23:37.589 ========================== 00:23:37.589 Submission Queue Entry Size 00:23:37.589 Max: 64 00:23:37.589 Min: 64 00:23:37.589 Completion Queue Entry Size 00:23:37.589 Max: 16 00:23:37.589 Min: 16 00:23:37.589 Number of Namespaces: 32 00:23:37.589 Compare Command: Supported 00:23:37.590 Write Uncorrectable Command: Not Supported 00:23:37.590 Dataset Management Command: Supported 00:23:37.590 Write Zeroes Command: Supported 00:23:37.590 Set Features Save Field: Not Supported 00:23:37.590 Reservations: Supported 00:23:37.590 Timestamp: Not Supported 00:23:37.590 Copy: Supported 00:23:37.590 Volatile Write Cache: Present 00:23:37.590 Atomic Write Unit (Normal): 1 00:23:37.590 Atomic Write Unit (PFail): 1 00:23:37.590 Atomic Compare & Write Unit: 1 00:23:37.590 Fused Compare & Write: Supported 00:23:37.590 Scatter-Gather List 00:23:37.590 SGL Command Set: Supported 00:23:37.590 SGL Keyed: Supported 00:23:37.590 SGL Bit Bucket Descriptor: Not Supported 00:23:37.590 SGL Metadata Pointer: Not Supported 00:23:37.590 Oversized SGL: Not Supported 00:23:37.590 SGL Metadata Address: Not Supported 00:23:37.590 SGL Offset: Supported 00:23:37.590 Transport SGL Data Block: Not Supported 00:23:37.590 Replay Protected Memory Block: Not Supported 00:23:37.590 00:23:37.590 Firmware Slot Information 00:23:37.590 ========================= 00:23:37.590 Active slot: 1 00:23:37.590 Slot 1 Firmware Revision: 24.05 00:23:37.590 00:23:37.590 00:23:37.590 Commands Supported and Effects 00:23:37.590 ============================== 00:23:37.590 Admin Commands 00:23:37.590 -------------- 00:23:37.590 Get Log Page (02h): Supported 00:23:37.590 Identify (06h): Supported 00:23:37.590 Abort (08h): Supported 00:23:37.590 Set Features (09h): Supported 00:23:37.590 Get Features (0Ah): Supported 00:23:37.590 Asynchronous Event Request 
(0Ch): Supported 00:23:37.590 Keep Alive (18h): Supported 00:23:37.590 I/O Commands 00:23:37.590 ------------ 00:23:37.590 Flush (00h): Supported LBA-Change 00:23:37.590 Write (01h): Supported LBA-Change 00:23:37.590 Read (02h): Supported 00:23:37.590 Compare (05h): Supported 00:23:37.590 Write Zeroes (08h): Supported LBA-Change 00:23:37.590 Dataset Management (09h): Supported LBA-Change 00:23:37.590 Copy (19h): Supported LBA-Change 00:23:37.590 Unknown (79h): Supported LBA-Change 00:23:37.590 Unknown (7Ah): Supported 00:23:37.590 00:23:37.590 Error Log 00:23:37.590 ========= 00:23:37.590 00:23:37.590 Arbitration 00:23:37.590 =========== 00:23:37.590 Arbitration Burst: 1 00:23:37.590 00:23:37.590 Power Management 00:23:37.590 ================ 00:23:37.590 Number of Power States: 1 00:23:37.590 Current Power State: Power State #0 00:23:37.590 Power State #0: 00:23:37.590 Max Power: 0.00 W 00:23:37.590 Non-Operational State: Operational 00:23:37.590 Entry Latency: Not Reported 00:23:37.590 Exit Latency: Not Reported 00:23:37.590 Relative Read Throughput: 0 00:23:37.590 Relative Read Latency: 0 00:23:37.590 Relative Write Throughput: 0 00:23:37.590 Relative Write Latency: 0 00:23:37.590 Idle Power: Not Reported 00:23:37.590 Active Power: Not Reported 00:23:37.590 Non-Operational Permissive Mode: Not Supported 00:23:37.590 00:23:37.590 Health Information 00:23:37.590 ================== 00:23:37.590 Critical Warnings: 00:23:37.590 Available Spare Space: OK 00:23:37.590 Temperature: OK 00:23:37.590 Device Reliability: OK 00:23:37.590 Read Only: No 00:23:37.590 Volatile Memory Backup: OK 00:23:37.590 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:37.590 Temperature Threshold: [2024-05-15 12:25:05.907123] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.907129] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17f7ca0) 00:23:37.590 [2024-05-15 12:25:05.907136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.590 [2024-05-15 12:25:05.907148] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1862320, cid 7, qid 0 00:23:37.590 [2024-05-15 12:25:05.907382] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.590 [2024-05-15 12:25:05.907392] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.590 [2024-05-15 12:25:05.907396] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.907401] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1862320) on tqpair=0x17f7ca0 00:23:37.590 [2024-05-15 12:25:05.907434] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:37.590 [2024-05-15 12:25:05.907448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.590 [2024-05-15 12:25:05.907455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.590 [2024-05-15 12:25:05.907462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.590 [2024-05-15 12:25:05.907469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:37.590 [2024-05-15 12:25:05.907481] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.907486] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.907491] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17f7ca0) 00:23:37.590 [2024-05-15 12:25:05.907498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.590 [2024-05-15 12:25:05.907513] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861da0, cid 3, qid 0 00:23:37.590 [2024-05-15 12:25:05.907643] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.590 [2024-05-15 12:25:05.907650] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.590 [2024-05-15 12:25:05.907654] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.907659] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861da0) on tqpair=0x17f7ca0 00:23:37.590 [2024-05-15 12:25:05.907668] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.907673] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.907677] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17f7ca0) 00:23:37.590 [2024-05-15 12:25:05.907685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.590 [2024-05-15 12:25:05.907701] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861da0, cid 3, qid 0 00:23:37.590 [2024-05-15 12:25:05.907833] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.590 [2024-05-15 12:25:05.907840] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.590 [2024-05-15 12:25:05.907845] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.907850] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861da0) on tqpair=0x17f7ca0 00:23:37.590 [2024-05-15 12:25:05.907857] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:37.590 [2024-05-15 12:25:05.907862] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:37.590 [2024-05-15 12:25:05.907873] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.907878] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.907883] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17f7ca0) 00:23:37.590 [2024-05-15 12:25:05.907890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.590 [2024-05-15 12:25:05.907902] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861da0, cid 3, qid 0 00:23:37.590 [2024-05-15 12:25:05.908232] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.590 [2024-05-15 12:25:05.908240] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.590 [2024-05-15 12:25:05.908244] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.590 [2024-05-15 
12:25:05.908249] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861da0) on tqpair=0x17f7ca0 00:23:37.590 [2024-05-15 12:25:05.908260] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.908265] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.908270] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17f7ca0) 00:23:37.590 [2024-05-15 12:25:05.908277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.590 [2024-05-15 12:25:05.908289] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861da0, cid 3, qid 0 00:23:37.590 [2024-05-15 12:25:05.908606] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.590 [2024-05-15 12:25:05.908612] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.590 [2024-05-15 12:25:05.908620] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.908625] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861da0) on tqpair=0x17f7ca0 00:23:37.590 [2024-05-15 12:25:05.908636] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.908641] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.908645] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17f7ca0) 00:23:37.590 [2024-05-15 12:25:05.908652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.590 [2024-05-15 12:25:05.908664] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861da0, cid 3, qid 0 00:23:37.590 [2024-05-15 12:25:05.908825] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.590 [2024-05-15 12:25:05.908832] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.590 [2024-05-15 12:25:05.908836] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.908841] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861da0) on tqpair=0x17f7ca0 00:23:37.590 [2024-05-15 12:25:05.908853] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.908858] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.590 [2024-05-15 12:25:05.908862] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17f7ca0) 00:23:37.590 [2024-05-15 12:25:05.908869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.590 [2024-05-15 12:25:05.908881] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861da0, cid 3, qid 0 00:23:37.590 [2024-05-15 12:25:05.909002] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.591 [2024-05-15 12:25:05.909009] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.591 [2024-05-15 12:25:05.909013] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.591 [2024-05-15 12:25:05.909018] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861da0) on tqpair=0x17f7ca0 00:23:37.591 [2024-05-15 12:25:05.909029] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.591 [2024-05-15 12:25:05.909033] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.591 [2024-05-15 12:25:05.909038] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17f7ca0) 00:23:37.591 [2024-05-15 12:25:05.909045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.591 [2024-05-15 12:25:05.909056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861da0, cid 3, qid 0 00:23:37.591 [2024-05-15 12:25:05.913199] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.591 [2024-05-15 12:25:05.913208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.591 [2024-05-15 12:25:05.913213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.591 [2024-05-15 12:25:05.913217] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861da0) on tqpair=0x17f7ca0 00:23:37.591 [2024-05-15 12:25:05.913230] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:37.591 [2024-05-15 12:25:05.913234] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:37.591 [2024-05-15 12:25:05.913239] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17f7ca0) 00:23:37.591 [2024-05-15 12:25:05.913246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.591 [2024-05-15 12:25:05.913259] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1861da0, cid 3, qid 0 00:23:37.591 [2024-05-15 12:25:05.913658] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:37.591 [2024-05-15 12:25:05.913665] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:37.591 [2024-05-15 12:25:05.913672] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:37.591 [2024-05-15 12:25:05.913677] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1861da0) on tqpair=0x17f7ca0 00:23:37.591 [2024-05-15 12:25:05.913687] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:23:37.591 0 Kelvin (-273 Celsius) 00:23:37.591 Available Spare: 0% 00:23:37.591 Available Spare Threshold: 0% 00:23:37.591 Life Percentage Used: 0% 00:23:37.591 Data Units Read: 0 00:23:37.591 Data Units Written: 0 00:23:37.591 Host Read Commands: 0 00:23:37.591 Host Write Commands: 0 00:23:37.591 Controller Busy Time: 0 minutes 00:23:37.591 Power Cycles: 0 00:23:37.591 Power On Hours: 0 hours 00:23:37.591 Unsafe Shutdowns: 0 00:23:37.591 Unrecoverable Media Errors: 0 00:23:37.591 Lifetime Error Log Entries: 0 00:23:37.591 Warning Temperature Time: 0 minutes 00:23:37.591 Critical Temperature Time: 0 minutes 00:23:37.591 00:23:37.591 Number of Queues 00:23:37.591 ================ 00:23:37.591 Number of I/O Submission Queues: 127 00:23:37.591 Number of I/O Completion Queues: 127 00:23:37.591 00:23:37.591 Active Namespaces 00:23:37.591 ================= 00:23:37.591 Namespace ID:1 00:23:37.591 Error Recovery Timeout: Unlimited 00:23:37.591 Command Set Identifier: NVM (00h) 00:23:37.591 Deallocate: Supported 00:23:37.591 Deallocated/Unwritten Error: Not Supported 00:23:37.591 Deallocated Read Value: Unknown 00:23:37.591 Deallocate in Write Zeroes: Not Supported 00:23:37.591 
Deallocated Guard Field: 0xFFFF 00:23:37.591 Flush: Supported 00:23:37.591 Reservation: Supported 00:23:37.591 Namespace Sharing Capabilities: Multiple Controllers 00:23:37.591 Size (in LBAs): 131072 (0GiB) 00:23:37.591 Capacity (in LBAs): 131072 (0GiB) 00:23:37.591 Utilization (in LBAs): 131072 (0GiB) 00:23:37.591 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:37.591 EUI64: ABCDEF0123456789 00:23:37.591 UUID: e3109561-bcb8-4543-adab-2c63775d7b72 00:23:37.591 Thin Provisioning: Not Supported 00:23:37.591 Per-NS Atomic Units: Yes 00:23:37.591 Atomic Boundary Size (Normal): 0 00:23:37.591 Atomic Boundary Size (PFail): 0 00:23:37.591 Atomic Boundary Offset: 0 00:23:37.591 Maximum Single Source Range Length: 65535 00:23:37.591 Maximum Copy Length: 65535 00:23:37.591 Maximum Source Range Count: 1 00:23:37.591 NGUID/EUI64 Never Reused: No 00:23:37.591 Namespace Write Protected: No 00:23:37.591 Number of LBA Formats: 1 00:23:37.591 Current LBA Format: LBA Format #00 00:23:37.591 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:37.591 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.591 12:25:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.591 rmmod nvme_tcp 00:23:37.591 rmmod nvme_fabrics 00:23:37.591 rmmod nvme_keyring 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2216796 ']' 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2216796 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' -z 2216796 ']' 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # kill -0 2216796 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # uname 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2216796 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@965 -- # echo 'killing process with pid 2216796' 00:23:37.591 killing process with pid 2216796 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # kill 2216796 00:23:37.591 [2024-05-15 12:25:06.072799] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:37.591 12:25:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@971 -- # wait 2216796 00:23:37.851 12:25:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:37.851 12:25:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:37.851 12:25:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:37.851 12:25:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.851 12:25:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.851 12:25:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.851 12:25:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.851 12:25:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.386 12:25:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:40.386 00:23:40.386 real 0m10.497s 00:23:40.386 user 0m8.175s 00:23:40.386 sys 0m5.392s 00:23:40.386 12:25:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:40.386 12:25:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.386 ************************************ 00:23:40.386 END TEST nvmf_identify 00:23:40.386 ************************************ 00:23:40.386 12:25:08 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:40.386 12:25:08 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:40.386 12:25:08 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:40.386 12:25:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:40.386 ************************************ 00:23:40.386 START TEST nvmf_perf 00:23:40.386 ************************************ 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:40.386 * Looking for test storage... 
00:23:40.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.386 12:25:08 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.386 12:25:08 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:40.387 12:25:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:46.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:46.955 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:46.956 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:46.956 Found net devices under 0000:af:00.0: cvl_0_0 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:46.956 Found net devices under 0000:af:00.1: cvl_0_1 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:46.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:23:46.956 00:23:46.956 --- 10.0.0.2 ping statistics --- 00:23:46.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.956 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:23:46.956 00:23:46.956 --- 10.0.0.1 ping statistics --- 00:23:46.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.956 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2220587 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2220587 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # '[' -z 2220587 ']' 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:46.956 12:25:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:46.956 [2024-05-15 12:25:15.026989] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:23:46.956 [2024-05-15 12:25:15.027038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.956 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.956 [2024-05-15 12:25:15.100467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.956 [2024-05-15 12:25:15.174544] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.956 [2024-05-15 12:25:15.174584] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:46.956 [2024-05-15 12:25:15.174594] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.956 [2024-05-15 12:25:15.174602] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.956 [2024-05-15 12:25:15.174609] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.956 [2024-05-15 12:25:15.174652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.956 [2024-05-15 12:25:15.174747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.956 [2024-05-15 12:25:15.174829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.956 [2024-05-15 12:25:15.174831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.523 12:25:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:47.523 12:25:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@861 -- # return 0 00:23:47.523 12:25:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:47.523 12:25:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:47.523 12:25:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:47.523 12:25:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.523 12:25:15 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:47.523 12:25:15 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:50.842 12:25:18 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:50.842 12:25:18 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:50.842 12:25:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:23:50.842 12:25:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:50.842 12:25:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:50.842 12:25:19 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:23:50.842 12:25:19 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:50.842 12:25:19 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:50.842 12:25:19 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:51.101 [2024-05-15 12:25:19.451831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.101 12:25:19 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:51.359 12:25:19 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:51.359 12:25:19 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:51.359 12:25:19 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:51.359 12:25:19 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:51.616 12:25:20 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:51.874 [2024-05-15 12:25:20.210387] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:51.875 [2024-05-15 12:25:20.210665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.875 12:25:20 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:52.133 12:25:20 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:23:52.133 12:25:20 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:23:52.133 12:25:20 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:52.133 12:25:20 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:23:53.507 Initializing NVMe Controllers 00:23:53.507 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:23:53.507 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:23:53.507 Initialization complete. Launching workers. 00:23:53.507 ======================================================== 00:23:53.507 Latency(us) 00:23:53.507 Device Information : IOPS MiB/s Average min max 00:23:53.507 PCIE (0000:d8:00.0) NSID 1 from core 0: 101704.49 397.28 315.32 30.66 7192.74 00:23:53.507 ======================================================== 00:23:53.507 Total : 101704.49 397.28 315.32 30.66 7192.74 00:23:53.507 00:23:53.507 12:25:21 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:53.507 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.442 Initializing NVMe Controllers 00:23:54.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:54.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:54.442 Initialization complete. Launching workers. 
00:23:54.442 ======================================================== 00:23:54.442 Latency(us) 00:23:54.442 Device Information : IOPS MiB/s Average min max 00:23:54.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.00 0.31 12999.29 478.11 45445.10 00:23:54.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18650.42 7963.82 47905.43 00:23:54.442 ======================================================== 00:23:54.442 Total : 136.00 0.53 15326.22 478.11 47905.43 00:23:54.442 00:23:54.442 12:25:22 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:54.701 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.076 Initializing NVMe Controllers 00:23:56.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:56.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:56.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:56.076 Initialization complete. Launching workers. 00:23:56.076 ======================================================== 00:23:56.076 Latency(us) 00:23:56.076 Device Information : IOPS MiB/s Average min max 00:23:56.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8533.99 33.34 3757.34 671.46 8509.69 00:23:56.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3866.00 15.10 8330.71 6855.79 15988.70 00:23:56.076 ======================================================== 00:23:56.076 Total : 12399.99 48.44 5183.20 671.46 15988.70 00:23:56.076 00:23:56.076 12:25:24 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:56.076 12:25:24 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:56.076 12:25:24 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:56.076 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.629 Initializing NVMe Controllers 00:23:58.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:58.629 Controller IO queue size 128, less than required. 00:23:58.629 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:58.629 Controller IO queue size 128, less than required. 00:23:58.629 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:58.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:58.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:58.629 Initialization complete. Launching workers. 
00:23:58.629 ======================================================== 00:23:58.629 Latency(us) 00:23:58.629 Device Information : IOPS MiB/s Average min max 00:23:58.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 883.95 220.99 149123.08 96989.67 236655.98 00:23:58.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 584.96 146.24 226582.75 85947.11 359032.70 00:23:58.629 ======================================================== 00:23:58.629 Total : 1468.91 367.23 179969.85 85947.11 359032.70 00:23:58.629 00:23:58.629 12:25:26 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:58.629 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.886 No valid NVMe controllers or AIO or URING devices found 00:23:58.886 Initializing NVMe Controllers 00:23:58.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:58.886 Controller IO queue size 128, less than required. 00:23:58.886 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:58.886 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:58.886 Controller IO queue size 128, less than required. 00:23:58.886 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:58.886 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:58.887 WARNING: Some requested NVMe devices were skipped 00:23:58.887 12:25:27 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:58.887 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.414 Initializing NVMe Controllers 00:24:01.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:01.414 Controller IO queue size 128, less than required. 00:24:01.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:01.414 Controller IO queue size 128, less than required. 00:24:01.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:01.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:01.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:01.414 Initialization complete. Launching workers. 
00:24:01.414 00:24:01.414 ==================== 00:24:01.414 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:01.414 TCP transport: 00:24:01.414 polls: 57107 00:24:01.414 idle_polls: 21564 00:24:01.414 sock_completions: 35543 00:24:01.414 nvme_completions: 3939 00:24:01.414 submitted_requests: 5940 00:24:01.414 queued_requests: 1 00:24:01.414 00:24:01.414 ==================== 00:24:01.414 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:01.414 TCP transport: 00:24:01.414 polls: 51899 00:24:01.414 idle_polls: 14319 00:24:01.414 sock_completions: 37580 00:24:01.414 nvme_completions: 3743 00:24:01.414 submitted_requests: 5500 00:24:01.414 queued_requests: 1 00:24:01.414 ======================================================== 00:24:01.414 Latency(us) 00:24:01.414 Device Information : IOPS MiB/s Average min max 00:24:01.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 982.48 245.62 136366.07 76234.36 229195.87 00:24:01.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 933.58 233.40 144850.04 85353.37 221902.10 00:24:01.414 ======================================================== 00:24:01.414 Total : 1916.06 479.02 140499.80 76234.36 229195.87 00:24:01.414 00:24:01.672 12:25:29 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:01.672 12:25:29 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.672 12:25:30 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:01.672 12:25:30 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:01.672 12:25:30 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:01.672 12:25:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:01.672 12:25:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:01.672 12:25:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.672 12:25:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:01.672 12:25:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.672 12:25:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.672 rmmod nvme_tcp 00:24:01.672 rmmod nvme_fabrics 00:24:01.929 rmmod nvme_keyring 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2220587 ']' 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2220587 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' -z 2220587 ']' 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # kill -0 2220587 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # uname 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2220587 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:01.929 12:25:30 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2220587' 00:24:01.929 killing process with pid 2220587 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # kill 2220587 00:24:01.929 [2024-05-15 12:25:30.284080] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:01.929 12:25:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@971 -- # wait 2220587 00:24:04.454 12:25:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:04.454 12:25:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:04.454 12:25:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:04.454 12:25:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:04.454 12:25:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:04.454 12:25:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.454 12:25:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.454 12:25:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.353 12:25:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:06.353 00:24:06.353 real 0m26.025s 00:24:06.353 user 1m8.893s 00:24:06.353 sys 0m8.264s 00:24:06.353 12:25:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:06.353 12:25:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:06.353 ************************************ 00:24:06.353 END TEST nvmf_perf 00:24:06.353 ************************************ 00:24:06.353 12:25:34 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:06.353 12:25:34 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:06.353 12:25:34 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:06.353 12:25:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:06.353 ************************************ 00:24:06.353 START TEST nvmf_fio_host 00:24:06.353 ************************************ 00:24:06.353 12:25:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:06.353 * Looking for test storage... 
00:24:06.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.353 12:25:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.353 12:25:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.353 12:25:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.353 12:25:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.353 12:25:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.353 12:25:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:06.354 12:25:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
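The nvmftestinit/nvmf_tcp_init sequence traced above for nvmf_perf, and about to be repeated here for nvmf_fio_host, amounts to discovering the E810 net devices through sysfs and wiring one of them into a network namespace so the target (10.0.0.2) and initiator (10.0.0.1) talk over a real link on port 4420. A minimal standalone sketch follows; it assumes a root shell with iproute2 and iptables available and reuses the cvl_0_0/cvl_0_1 interface names, the 0000:af:00.x PCI addresses and the 10.0.0.0/24 addressing shown in the log, so it is an illustration of the traced steps rather than the harness code itself.

  # discover the net devices behind the two E810 ports (0x8086:0x159b)
  for pci in /sys/bus/pci/devices/0000:af:00.*; do ls "$pci/net"; done
  ip netns add cvl_0_0_ns_spdk                                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (namespace side)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # host -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host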
00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:12.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.955 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:12.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:12.956 Found net devices under 0000:af:00.0: cvl_0_0 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:12.956 Found net devices under 0000:af:00.1: cvl_0_1 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.956 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:13.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:24:13.214 00:24:13.214 --- 10.0.0.2 ping statistics --- 00:24:13.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.214 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:13.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:24:13.214 00:24:13.214 --- 10.0.0.1 ping statistics --- 00:24:13.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.214 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:13.214 12:25:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=2227282 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 2227282 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # '[' -z 2227282 ']' 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:13.472 12:25:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.472 [2024-05-15 12:25:41.814686] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:24:13.472 [2024-05-15 12:25:41.814734] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.472 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.472 [2024-05-15 12:25:41.887550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:13.472 [2024-05-15 12:25:41.961769] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:13.472 [2024-05-15 12:25:41.961806] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.472 [2024-05-15 12:25:41.961815] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.472 [2024-05-15 12:25:41.961823] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.472 [2024-05-15 12:25:41.961831] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.472 [2024-05-15 12:25:41.961877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.472 [2024-05-15 12:25:41.961971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.472 [2024-05-15 12:25:41.962055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.472 [2024-05-15 12:25:41.962057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@861 -- # return 0 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.405 [2024-05-15 12:25:42.616885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.405 Malloc1 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:24:14.405 [2024-05-15 12:25:42.715584] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:14.405 [2024-05-15 12:25:42.715841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:24:14.405 
12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:14.405 12:25:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:14.663 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:14.663 fio-3.35 00:24:14.663 Starting 1 thread 00:24:14.663 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.187 00:24:17.187 test: (groupid=0, jobs=1): err= 0: pid=2227719: Wed May 15 12:25:45 2024 00:24:17.187 read: IOPS=11.7k, BW=45.9MiB/s (48.1MB/s)(92.0MiB/2005msec) 00:24:17.187 slat (nsec): min=1556, max=242890, avg=1686.00, stdev=2193.18 00:24:17.187 clat (usec): min=3282, max=14571, avg=6290.79, stdev=1512.46 00:24:17.187 lat (usec): min=3283, max=14573, avg=6292.48, stdev=1512.57 00:24:17.187 clat percentiles (usec): 00:24:17.187 | 1.00th=[ 4146], 5.00th=[ 4752], 10.00th=[ 5014], 20.00th=[ 5407], 00:24:17.187 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 5932], 60.00th=[ 6063], 00:24:17.187 | 70.00th=[ 6325], 80.00th=[ 6718], 90.00th=[ 8094], 95.00th=[ 9765], 00:24:17.187 | 99.00th=[12256], 99.50th=[13042], 99.90th=[14353], 99.95th=[14484], 00:24:17.187 | 99.99th=[14615] 00:24:17.187 bw ( KiB/s): min=44688, max=48072, per=99.98%, avg=46976.00, stdev=1553.11, samples=4 00:24:17.187 iops : min=11172, max=12018, avg=11744.00, stdev=388.28, samples=4 00:24:17.187 write: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(91.4MiB/2005msec); 0 zone resets 00:24:17.187 slat (nsec): min=1612, max=246774, avg=1760.13, stdev=1774.42 00:24:17.187 clat (usec): min=2097, max=11174, avg=4561.57, stdev=854.90 00:24:17.187 lat (usec): min=2099, max=11190, avg=4563.33, stdev=855.09 00:24:17.187 clat percentiles (usec): 00:24:17.187 | 1.00th=[ 2737], 5.00th=[ 3228], 10.00th=[ 3523], 20.00th=[ 3949], 00:24:17.187 | 30.00th=[ 4228], 40.00th=[ 4424], 50.00th=[ 4555], 60.00th=[ 4686], 00:24:17.187 | 70.00th=[ 4817], 80.00th=[ 5014], 90.00th=[ 5342], 95.00th=[ 6128], 00:24:17.187 | 99.00th=[ 7439], 99.50th=[ 7832], 99.90th=[ 9110], 99.95th=[ 9896], 00:24:17.187 | 99.99th=[10945] 00:24:17.187 bw ( KiB/s): min=45016, max=47456, per=99.98%, avg=46690.00, stdev=1141.48, samples=4 00:24:17.187 iops : min=11254, max=11864, avg=11672.50, stdev=285.37, samples=4 00:24:17.187 lat (msec) : 4=10.86%, 10=86.94%, 20=2.20% 00:24:17.187 cpu : usr=63.72%, sys=30.09%, ctx=20, majf=0, minf=4 00:24:17.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:17.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:17.187 issued rwts: total=23552,23408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.187 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:17.187 00:24:17.188 Run status group 0 (all jobs): 00:24:17.188 READ: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=92.0MiB (96.5MB), run=2005-2005msec 00:24:17.188 WRITE: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=91.4MiB (95.9MB), run=2005-2005msec 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:17.188 12:25:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:17.445 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:17.445 fio-3.35 00:24:17.445 Starting 1 thread 00:24:17.445 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.972 00:24:19.972 test: (groupid=0, jobs=1): err= 0: pid=2228281: Wed May 15 12:25:48 2024 00:24:19.972 read: IOPS=9959, BW=156MiB/s (163MB/s)(312MiB/2003msec) 00:24:19.972 slat (nsec): min=2438, max=79168, avg=2729.90, stdev=1259.17 00:24:19.972 clat (usec): min=1464, max=36129, avg=7737.94, stdev=3125.71 00:24:19.972 lat (usec): min=1466, max=36131, avg=7740.67, 
stdev=3126.06 00:24:19.972 clat percentiles (usec): 00:24:19.972 | 1.00th=[ 3687], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5604], 00:24:19.972 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 7111], 60.00th=[ 7701], 00:24:19.972 | 70.00th=[ 8356], 80.00th=[ 9110], 90.00th=[10552], 95.00th=[11994], 00:24:19.972 | 99.00th=[21627], 99.50th=[23200], 99.90th=[25297], 99.95th=[25560], 00:24:19.972 | 99.99th=[35914] 00:24:19.972 bw ( KiB/s): min=77984, max=88096, per=51.30%, avg=81752.00, stdev=4800.31, samples=4 00:24:19.972 iops : min= 4874, max= 5506, avg=5109.50, stdev=300.02, samples=4 00:24:19.972 write: IOPS=5879, BW=91.9MiB/s (96.3MB/s)(166MiB/1807msec); 0 zone resets 00:24:19.972 slat (usec): min=28, max=376, avg=30.15, stdev= 7.44 00:24:19.972 clat (usec): min=1504, max=30696, avg=8827.97, stdev=2983.55 00:24:19.972 lat (usec): min=1533, max=30728, avg=8858.11, stdev=2986.37 00:24:19.972 clat percentiles (usec): 00:24:19.972 | 1.00th=[ 5604], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7111], 00:24:19.973 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8356], 60.00th=[ 8717], 00:24:19.973 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10552], 95.00th=[12256], 00:24:19.973 | 99.00th=[25297], 99.50th=[26084], 99.90th=[28181], 99.95th=[28443], 00:24:19.973 | 99.99th=[28705] 00:24:19.973 bw ( KiB/s): min=80352, max=91552, per=90.21%, avg=84864.00, stdev=5091.79, samples=4 00:24:19.973 iops : min= 5022, max= 5722, avg=5304.00, stdev=318.24, samples=4 00:24:19.973 lat (msec) : 2=0.03%, 4=1.47%, 10=84.70%, 20=11.78%, 50=2.02% 00:24:19.973 cpu : usr=81.12%, sys=15.13%, ctx=16, majf=0, minf=1 00:24:19.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:19.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:19.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:19.973 issued rwts: total=19949,10624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:19.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:19.973 00:24:19.973 Run status group 0 (all jobs): 00:24:19.973 READ: bw=156MiB/s (163MB/s), 156MiB/s-156MiB/s (163MB/s-163MB/s), io=312MiB (327MB), run=2003-2003msec 00:24:19.973 WRITE: bw=91.9MiB/s (96.3MB/s), 91.9MiB/s-91.9MiB/s (96.3MB/s-96.3MB/s), io=166MiB (174MB), run=1807-1807msec 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:19.973 rmmod nvme_tcp 00:24:19.973 rmmod nvme_fabrics 00:24:19.973 rmmod nvme_keyring 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2227282 ']' 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2227282 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' -z 2227282 ']' 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # kill -0 2227282 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # uname 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2227282 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2227282' 00:24:19.973 killing process with pid 2227282 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # kill 2227282 00:24:19.973 [2024-05-15 12:25:48.461144] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:19.973 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@971 -- # wait 2227282 00:24:20.232 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:20.232 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:20.232 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:20.232 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:20.232 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:20.232 12:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.232 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.232 12:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.857 12:25:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:22.857 00:24:22.857 real 0m16.163s 00:24:22.857 user 0m47.889s 00:24:22.857 sys 0m7.614s 00:24:22.857 12:25:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:22.857 12:25:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.857 ************************************ 00:24:22.857 END TEST nvmf_fio_host 00:24:22.857 ************************************ 00:24:22.857 12:25:50 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:22.857 12:25:50 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:22.857 12:25:50 nvmf_tcp -- common/autotest_common.sh@1104 -- # 
xtrace_disable 00:24:22.857 12:25:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:22.857 ************************************ 00:24:22.857 START TEST nvmf_failover 00:24:22.857 ************************************ 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:22.857 * Looking for test storage... 00:24:22.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:22.857 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.858 12:25:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.858 12:25:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.858 12:25:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:22.858 12:25:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:22.858 12:25:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:22.858 12:25:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:29.418 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:29.418 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:29.418 Found net devices under 0000:af:00.0: cvl_0_0 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:29.418 Found net devices under 0000:af:00.1: cvl_0_1 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:29.418 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:29.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:29.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:24:29.419 00:24:29.419 --- 10.0.0.2 ping statistics --- 00:24:29.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.419 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:29.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:29.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:24:29.419 00:24:29.419 --- 10.0.0.1 ping statistics --- 00:24:29.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.419 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2232393 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2232393 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2232393 ']' 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:29.419 12:25:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:29.419 [2024-05-15 12:25:57.878381] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
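For reference, the namespace plumbing that the nvmf_tcp_init step exercised above reduces to roughly the following commands. This is a condensed sketch reusing the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses seen in this run; the helper in nvmf/common.sh interleaves these with checks that are not repeated here.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check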
00:24:29.419 [2024-05-15 12:25:57.878433] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.419 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.677 [2024-05-15 12:25:57.955509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:29.677 [2024-05-15 12:25:58.028944] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.677 [2024-05-15 12:25:58.028980] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.677 [2024-05-15 12:25:58.028990] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.677 [2024-05-15 12:25:58.028998] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.677 [2024-05-15 12:25:58.029006] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.677 [2024-05-15 12:25:58.029117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.677 [2024-05-15 12:25:58.029222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.677 [2024-05-15 12:25:58.029224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.241 12:25:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:30.241 12:25:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:24:30.241 12:25:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:30.241 12:25:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:30.241 12:25:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:30.241 12:25:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.241 12:25:58 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:30.498 [2024-05-15 12:25:58.889829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.498 12:25:58 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:30.755 Malloc0 00:24:30.755 12:25:59 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:31.012 12:25:59 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:31.012 12:25:59 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.269 [2024-05-15 12:25:59.642876] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:31.269 [2024-05-15 12:25:59.643108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.269 12:25:59 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:31.527 [2024-05-15 12:25:59.823581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:31.527 12:25:59 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:31.527 [2024-05-15 12:26:00.000162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:31.527 12:26:00 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:31.527 12:26:00 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2232801 00:24:31.527 12:26:00 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:31.527 12:26:00 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2232801 /var/tmp/bdevperf.sock 00:24:31.527 12:26:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2232801 ']' 00:24:31.527 12:26:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.527 12:26:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:31.527 12:26:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
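The target configuration and the bdevperf initiator that failover.sh drives around this point amount to roughly the RPC sequence below. This is a sketch based only on the values visible in the surrounding trace, with the long workspace paths shortened to repo-relative ones; bdevperf is shown started concurrently because -z makes it wait for RPCs on its own socket before running the workload.
  # target side: TCP transport, malloc namespace, subsystem, three TCP listeners
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # initiator side: bdevperf attaches NVMe0 over TCP on ports 4420/4421 and runs the verify workload
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &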
00:24:31.527 12:26:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:31.527 12:26:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:32.456 12:26:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:32.456 12:26:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:24:32.456 12:26:00 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:32.713 NVMe0n1 00:24:32.969 12:26:01 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:33.226 00:24:33.226 12:26:01 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:33.226 12:26:01 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2233066 00:24:33.226 12:26:01 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:34.160 12:26:02 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.418 [2024-05-15 12:26:02.702472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702578] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702620] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 [2024-05-15 12:26:02.702637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6dfc0 is same with the state(5) to be set 00:24:34.418 12:26:02 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:37.703 12:26:05 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:37.703 00:24:37.703 12:26:06 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:37.703 [2024-05-15 12:26:06.215950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.215997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.703 [2024-05-15 12:26:06.216124] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.704 [2024-05-15 12:26:06.217051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.704 [2024-05-15 12:26:06.217060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.704 [2024-05-15 12:26:06.217068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6eb80 is same with the state(5) to be set 00:24:37.961 12:26:06 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:41.239 12:26:09 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.239 [2024-05-15 12:26:09.410824] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.239 12:26:09 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:42.176 12:26:10 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:42.176 [2024-05-15 12:26:10.603631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603766] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603784] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 [2024-05-15 12:26:10.603827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5800 is same with the state(5) to be set 00:24:42.176 12:26:10 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2233066 00:24:48.732 0 00:24:48.732 12:26:16 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2232801 00:24:48.732 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2232801 ']' 00:24:48.732 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2232801 00:24:48.732 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:24:48.732 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:48.732 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2232801 00:24:48.732 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:48.732 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:48.732 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2232801' 00:24:48.732 killing process with pid 2232801 00:24:48.732 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2232801 00:24:48.732 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2232801 00:24:48.732 12:26:16 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:48.732 [2024-05-15 12:26:00.063887] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:24:48.732 [2024-05-15 12:26:00.063957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2232801 ] 00:24:48.732 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.732 [2024-05-15 12:26:00.134430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.732 [2024-05-15 12:26:00.204945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.732 Running I/O for 15 seconds... 
00:24:48.732 [2024-05-15 12:26:02.703934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.703973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.703999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704279] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.732 [2024-05-15 12:26:02.704443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.732 [2024-05-15 12:26:02.704471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.732 [2024-05-15 12:26:02.704500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.732 [2024-05-15 12:26:02.704528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.732 [2024-05-15 12:26:02.704556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.732 [2024-05-15 12:26:02.704585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.732 [2024-05-15 12:26:02.704600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:99 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.704976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.704991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.733 [2024-05-15 12:26:02.705062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.733 [2024-05-15 12:26:02.705090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.733 [2024-05-15 12:26:02.705118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98488 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:48.733 [2024-05-15 12:26:02.705146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.733 [2024-05-15 12:26:02.705173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.733 [2024-05-15 12:26:02.705206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.733 [2024-05-15 12:26:02.705234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 
12:26:02.705433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.733 [2024-05-15 12:26:02.705599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.733 [2024-05-15 12:26:02.705614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.705626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.705655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.705683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.705711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.705740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.705767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.705795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.705824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.705852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.705880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.734 [2024-05-15 12:26:02.705908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.734 [2024-05-15 12:26:02.705935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.734 [2024-05-15 12:26:02.705963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.705978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.734 [2024-05-15 12:26:02.705992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.734 [2024-05-15 12:26:02.706020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.734 [2024-05-15 12:26:02.706048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 
12:26:02.706579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.734 [2024-05-15 12:26:02.706702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.734 [2024-05-15 12:26:02.706717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.735 [2024-05-15 12:26:02.706729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.706744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.735 [2024-05-15 12:26:02.706757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.706772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.735 [2024-05-15 12:26:02.706785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.706800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.735 [2024-05-15 12:26:02.706815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.706829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.735 [2024-05-15 12:26:02.706843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.706857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.735 [2024-05-15 12:26:02.706870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.706885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.735 [2024-05-15 12:26:02.706897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.706912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.735 [2024-05-15 12:26:02.706925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.706940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.735 [2024-05-15 12:26:02.706953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.706984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.706998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99232 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99240 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99248 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99256 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:48.735 [2024-05-15 12:26:02.707177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99264 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99272 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99280 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99288 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99296 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99304 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707467] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99312 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99320 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99328 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99336 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99344 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99352 len:8 PRP1 0x0 PRP2 0x0 00:24:48.735 [2024-05-15 12:26:02.707745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.735 [2024-05-15 12:26:02.707758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:24:48.735 [2024-05-15 12:26:02.707769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.735 [2024-05-15 12:26:02.707780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98568 len:8 PRP1 0x0 PRP2 0x0 00:24:48.736 [2024-05-15 12:26:02.707793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:02.707806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.736 [2024-05-15 12:26:02.707816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.736 [2024-05-15 12:26:02.707827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98576 len:8 PRP1 0x0 PRP2 0x0 00:24:48.736 [2024-05-15 12:26:02.707841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:02.707854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.736 [2024-05-15 12:26:02.707864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.736 [2024-05-15 12:26:02.707876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98584 len:8 PRP1 0x0 PRP2 0x0 00:24:48.736 [2024-05-15 12:26:02.707889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:02.707902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.736 [2024-05-15 12:26:02.707912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.736 [2024-05-15 12:26:02.707923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98592 len:8 PRP1 0x0 PRP2 0x0 00:24:48.736 [2024-05-15 12:26:02.707936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:02.707951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.736 [2024-05-15 12:26:02.707961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.736 [2024-05-15 12:26:02.707973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98600 len:8 PRP1 0x0 PRP2 0x0 00:24:48.736 [2024-05-15 12:26:02.707987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:02.708000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.736 [2024-05-15 12:26:02.708010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.736 [2024-05-15 12:26:02.708021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98608 len:8 PRP1 0x0 PRP2 0x0 00:24:48.736 [2024-05-15 12:26:02.708033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:02.708051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.736 [2024-05-15 12:26:02.708062] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:48.736 [2024-05-15 12:26:02.708073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98616 len:8 PRP1 0x0 PRP2 0x0
00:24:48.736 [2024-05-15 12:26:02.708086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.736 [2024-05-15 12:26:02.708140] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2541840 was disconnected and freed. reset controller.
00:24:48.736 [2024-05-15 12:26:02.708161] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:48.736 [2024-05-15 12:26:02.708198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.736 [2024-05-15 12:26:02.708213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.736 [2024-05-15 12:26:02.708228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.736 [2024-05-15 12:26:02.708241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.736 [2024-05-15 12:26:02.708255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.736 [2024-05-15 12:26:02.708269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.736 [2024-05-15 12:26:02.708282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.736 [2024-05-15 12:26:02.708295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.736 [2024-05-15 12:26:02.708308] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.736 [2024-05-15 12:26:02.711440] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.736 [2024-05-15 12:26:02.711480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2520590 (9): Bad file descriptor
00:24:48.736 [2024-05-15 12:26:02.739962] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:48.736 [2024-05-15 12:26:06.217832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.217875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.217903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.217918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.217933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.217947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.217962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.217975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.217991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.218020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.218048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.218075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.218103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.218132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.218160] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.218189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.218222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.218250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.218283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.218312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.736 [2024-05-15 12:26:06.218340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.736 [2024-05-15 12:26:06.218355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:45 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.218977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.218994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.219007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.219023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44920 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.219037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.219052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.219065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.219080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.219093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.219108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.219121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.219135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.219149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.219164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.219176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.219197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.219210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.219225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.219238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.219253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.219267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.737 [2024-05-15 12:26:06.219282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.737 [2024-05-15 12:26:06.219295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:48.738 [2024-05-15 12:26:06.219324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 
12:26:06.219606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.219981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.219994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.738 [2024-05-15 12:26:06.220394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.738 [2024-05-15 12:26:06.220407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.739 [2024-05-15 12:26:06.220438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.739 [2024-05-15 12:26:06.220466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 
12:26:06.220767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.220978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.220993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:18 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.739 [2024-05-15 12:26:06.221436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.739 [2024-05-15 12:26:06.221450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:06.221464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:06.221479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:06.221492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:06.221523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.740 [2024-05-15 12:26:06.221535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.740 [2024-05-15 12:26:06.221547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45616 len:8 PRP1 0x0 PRP2 0x0 00:24:48.740 [2024-05-15 12:26:06.221562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:06.221618] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26e9d80 was disconnected and freed. reset controller. 
00:24:48.740 [2024-05-15 12:26:06.221634] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:48.740 [2024-05-15 12:26:06.221665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.740 [2024-05-15 12:26:06.221679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.740 [2024-05-15 12:26:06.221693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.740 [2024-05-15 12:26:06.221706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.740 [2024-05-15 12:26:06.221720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.740 [2024-05-15 12:26:06.221733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.740 [2024-05-15 12:26:06.221747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.740 [2024-05-15 12:26:06.221761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.740 [2024-05-15 12:26:06.221773] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.740 [2024-05-15 12:26:06.221806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2520590 (9): Bad file descriptor
00:24:48.740 [2024-05-15 12:26:06.224946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.740 [2024-05-15 12:26:06.380338] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:48.740 [2024-05-15 12:26:10.605233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 
12:26:10.605571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.740 [2024-05-15 12:26:10.605783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:10.605811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:10.605839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:10.605869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:10.605897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:10.605924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:10.605952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:10.605980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.605995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:10.606008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.606022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:10.606036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.606050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:10.606063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.606078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:10.606091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.606106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.740 [2024-05-15 12:26:10.606120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.740 [2024-05-15 12:26:10.606137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:94 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118176 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:48.741 [2024-05-15 12:26:10.606718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.606983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.606996] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.607011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.607024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.607039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.741 [2024-05-15 12:26:10.607052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.741 [2024-05-15 12:26:10.607067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:118456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:118464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:118472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:118520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:118544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.742 [2024-05-15 12:26:10.607786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.742 [2024-05-15 12:26:10.607813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.742 [2024-05-15 12:26:10.607842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.742 [2024-05-15 12:26:10.607870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.742 [2024-05-15 12:26:10.607897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.742 [2024-05-15 12:26:10.607928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.742 [2024-05-15 12:26:10.607955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.607983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.607998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.608011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.608026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.608039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.608053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.608067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.608082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:118592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.608095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.608110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.608122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:48.742 [2024-05-15 12:26:10.608137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.608150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.608165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.742 [2024-05-15 12:26:10.608179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.742 [2024-05-15 12:26:10.608198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:118664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608424] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:118696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:118720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:118736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:118744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:118808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.743 [2024-05-15 12:26:10.608855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.743 [2024-05-15 12:26:10.608906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.743 [2024-05-15 12:26:10.608917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117992 len:8 PRP1 0x0 PRP2 0x0 00:24:48.743 [2024-05-15 12:26:10.608930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.608984] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2543a40 was disconnected and freed. reset controller. 
00:24:48.743 [2024-05-15 12:26:10.609000] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:48.743 [2024-05-15 12:26:10.609030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.743 [2024-05-15 12:26:10.609045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.609062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.743 [2024-05-15 12:26:10.609075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.609089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.743 [2024-05-15 12:26:10.609102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.609117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.743 [2024-05-15 12:26:10.609130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.743 [2024-05-15 12:26:10.609143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:48.743 [2024-05-15 12:26:10.609174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2520590 (9): Bad file descriptor 00:24:48.743 [2024-05-15 12:26:10.612297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:48.743 [2024-05-15 12:26:10.644538] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
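The abort flood above is the expected effect of tearing down the active path: every I/O still queued on that qpair completes with ABORTED - SQ DELETION (the "(00/08)" pair is SPDK's sct/sc print, i.e. generic status, command aborted due to SQ deletion), the bdev layer fails over from 10.0.0.2:4422 back to 10.0.0.2:4420, and the reset is logged as successful. The pass/fail check the script runs right after this simply counts those reset notices in the captured bdevperf output; a minimal standalone version of that check (a sketch assuming the output was saved to try.txt, the same file the script cats below) would be:

  # One successful reset is expected per failover leg: 4420 -> 4421 -> 4422 -> 4420.
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count == 3 )) || { echo "unexpected reset count: $count" >&2; exit 1; }
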
00:24:48.743 00:24:48.743 Latency(us) 00:24:48.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.743 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:48.743 Verification LBA range: start 0x0 length 0x4000 00:24:48.743 NVMe0n1 : 15.01 11691.26 45.67 701.39 0.00 10306.22 1238.63 24641.54 00:24:48.743 =================================================================================================================== 00:24:48.743 Total : 11691.26 45.67 701.39 0.00 10306.22 1238.63 24641.54 00:24:48.743 Received shutdown signal, test time was about 15.000000 seconds 00:24:48.743 00:24:48.743 Latency(us) 00:24:48.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.743 =================================================================================================================== 00:24:48.743 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:48.743 12:26:16 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:48.743 12:26:16 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:48.743 12:26:16 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:48.743 12:26:16 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2235649 00:24:48.743 12:26:16 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:48.743 12:26:16 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2235649 /var/tmp/bdevperf.sock 00:24:48.743 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2235649 ']' 00:24:48.743 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.743 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:48.744 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:48.744 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:48.744 12:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:49.308 12:26:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:49.308 12:26:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:24:49.308 12:26:17 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:49.565 [2024-05-15 12:26:17.954341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:49.565 12:26:17 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:49.821 [2024-05-15 12:26:18.130789] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:49.822 12:26:18 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:50.079 NVMe0n1 00:24:50.079 12:26:18 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:50.643 00:24:50.643 12:26:18 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:50.643 00:24:50.900 12:26:19 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:50.901 12:26:19 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:50.901 12:26:19 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:51.158 12:26:19 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:54.506 12:26:22 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:54.506 12:26:22 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:54.506 12:26:22 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:54.506 12:26:22 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2236546 00:24:54.506 12:26:22 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2236546 00:24:55.436 0 00:24:55.436 12:26:23 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:55.436 [2024-05-15 12:26:16.994186] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:24:55.437 [2024-05-15 12:26:16.994252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235649 ] 00:24:55.437 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.437 [2024-05-15 12:26:17.065380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.437 [2024-05-15 12:26:17.130014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.437 [2024-05-15 12:26:19.507948] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:55.437 [2024-05-15 12:26:19.507998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.437 [2024-05-15 12:26:19.508018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.437 [2024-05-15 12:26:19.508034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.437 [2024-05-15 12:26:19.508048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.437 [2024-05-15 12:26:19.508063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.437 [2024-05-15 12:26:19.508077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.437 [2024-05-15 12:26:19.508091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.437 [2024-05-15 12:26:19.508107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.437 [2024-05-15 12:26:19.508120] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.437 [2024-05-15 12:26:19.508150] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.437 [2024-05-15 12:26:19.508172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c8a590 (9): Bad file descriptor 00:24:55.437 [2024-05-15 12:26:19.519639] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:55.437 Running I/O for 1 seconds... 
00:24:55.437 00:24:55.437 Latency(us) 00:24:55.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.437 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:55.437 Verification LBA range: start 0x0 length 0x4000 00:24:55.437 NVMe0n1 : 1.01 11318.19 44.21 0.00 0.00 11262.48 2424.83 24117.25 00:24:55.437 =================================================================================================================== 00:24:55.437 Total : 11318.19 44.21 0.00 0.00 11262.48 2424.83 24117.25 00:24:55.437 12:26:23 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:55.437 12:26:23 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:55.694 12:26:24 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.694 12:26:24 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:55.694 12:26:24 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:55.951 12:26:24 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:56.208 12:26:24 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:59.483 12:26:27 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:59.483 12:26:27 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:59.483 12:26:27 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2235649 00:24:59.483 12:26:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2235649 ']' 00:24:59.483 12:26:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2235649 00:24:59.484 12:26:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:24:59.484 12:26:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:59.484 12:26:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2235649 00:24:59.484 12:26:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:59.484 12:26:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:59.484 12:26:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2235649' 00:24:59.484 killing process with pid 2235649 00:24:59.484 12:26:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2235649 00:24:59.484 12:26:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2235649 00:24:59.740 12:26:28 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:59.740 12:26:28 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:59.740 12:26:28 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:59.740 
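Stripped of the xtrace noise, the failover exercise above reduces to a short RPC sequence: publish two extra listeners, give bdevperf three paths to the same subsystem, then detach paths one leg at a time while I/O is running. The sketch below only condenses the calls as they appear in the trace (rpc.py path, bdevperf socket, target address and NQN copied from the log); it omits the bdevperf.py perform_tests runs the script interleaves between legs, and the loop variable is just shorthand for the per-port invocations shown above.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # Expose two alternate target ports next to the original 4420 listener.
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

  # Hand bdevperf all three paths under the same bdev name.
  for port in 4420 4421 4422; do
      $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
  done

  # Drop the paths in the order the test uses; each detach should show up in the
  # bdevperf log as a failover followed by "Resetting controller successful".
  for port in 4420 4422 4421; do
      $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0
      $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
      sleep 3
  done
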
12:26:28 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:59.740 12:26:28 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:59.740 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:59.740 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:59.740 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:59.740 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:59.740 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:59.740 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:59.740 rmmod nvme_tcp 00:24:59.740 rmmod nvme_fabrics 00:24:59.740 rmmod nvme_keyring 00:24:59.740 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2232393 ']' 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2232393 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2232393 ']' 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2232393 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2232393 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2232393' 00:24:59.999 killing process with pid 2232393 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2232393 00:24:59.999 [2024-05-15 12:26:28.331100] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:59.999 12:26:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2232393 00:25:00.257 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:00.257 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:00.257 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:00.257 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:00.257 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:00.257 12:26:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.257 12:26:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.257 12:26:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.156 12:26:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:02.156 00:25:02.156 real 0m39.775s 00:25:02.156 user 
2m2.639s 00:25:02.156 sys 0m9.975s 00:25:02.156 12:26:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:02.156 12:26:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.156 ************************************ 00:25:02.156 END TEST nvmf_failover 00:25:02.156 ************************************ 00:25:02.414 12:26:30 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:02.414 12:26:30 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:02.414 12:26:30 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:02.414 12:26:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:02.414 ************************************ 00:25:02.414 START TEST nvmf_host_discovery 00:25:02.414 ************************************ 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:02.414 * Looking for test storage... 00:25:02.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:02.414 12:26:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
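nvmf/common.sh is building per-family PCI device-ID arrays here (e810, x722, mlx; the list continues just below) and will then scan the bus for matching NICs, resolving each match to its kernel net device via /sys/bus/pci/devices/$pci/net/. That lookup is where the "Found net devices under 0000:af:00.x: cvl_0_x" lines further down come from. A minimal standalone sketch of the same lookup for the E810 parts used in this run (vendor 0x8086, device 0x159b); lspci is an illustrative stand-in, not the pci_bus_cache helper the script actually uses:

  # List E810 functions by vendor:device ID, then map each PCI address to the
  # network interface the kernel exposes for it under sysfs.
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$netdir" ] || continue
          echo "Found net device under $pci: $(basename "$netdir")"
      done
  done
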
00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:08.963 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:08.963 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:08.963 Found net devices under 0000:af:00.0: cvl_0_0 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:08.963 Found net devices under 0000:af:00.1: cvl_0_1 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.963 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.964 12:26:36 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:08.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:25:08.964 00:25:08.964 --- 10.0.0.2 ping statistics --- 00:25:08.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.964 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:25:08.964 12:26:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:25:08.964 00:25:08.964 --- 10.0.0.1 ping statistics --- 00:25:08.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.964 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2241039 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2241039 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 2241039 ']' 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.964 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:08.964 [2024-05-15 12:26:37.083287] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:25:08.964 [2024-05-15 12:26:37.083337] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.964 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.964 [2024-05-15 12:26:37.158252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.964 [2024-05-15 12:26:37.231054] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:08.964 [2024-05-15 12:26:37.231090] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.964 [2024-05-15 12:26:37.231099] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.964 [2024-05-15 12:26:37.231109] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.964 [2024-05-15 12:26:37.231116] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.964 [2024-05-15 12:26:37.231136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.530 [2024-05-15 12:26:37.905780] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.530 [2024-05-15 12:26:37.913750] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:09.530 [2024-05-15 12:26:37.913937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.530 null0 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.530 null1 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- 
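At this point nvmf_tcp_init has finished splitting the two E810 ports across a network namespace: cvl_0_0 (10.0.0.2/24) was moved into cvl_0_0_ns_spdk to act as the target side, cvl_0_1 (10.0.0.1/24) stays in the host namespace as the initiator side, and nvmf_tgt was launched inside the namespace with core mask 0x2. A condensed sketch of that setup, reusing the interface and namespace names from the log (run as root; error handling and the full Jenkins workspace path omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic arriving on the initiator interface to reach port 4420.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check reachability in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Start the SPDK target inside the namespace (path abbreviated).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The rpc_cmd calls that follow (nvmf_create_transport -t tcp -o -u 8192, then nvmf_subsystem_add_listener on the discovery NQN at 10.0.0.2:8009) are issued against this in-namespace target.
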
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2241317 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2241317 /tmp/host.sock 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 2241317 ']' 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:09.530 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.530 12:26:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:09.530 [2024-05-15 12:26:37.991449] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:25:09.530 [2024-05-15 12:26:37.991492] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2241317 ] 00:25:09.530 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.788 [2024-05-15 12:26:38.061320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.788 [2024-05-15 12:26:38.136037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.353 12:26:38 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.353 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.611 12:26:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.611 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.612 [2024-05-15 12:26:39.129116] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:10.612 
12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.612 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == \n\v\m\e\0 ]] 00:25:10.870 12:26:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:25:11.436 [2024-05-15 12:26:39.855992] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:11.436 [2024-05-15 12:26:39.856017] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:11.436 [2024-05-15 12:26:39.856033] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:11.436 [2024-05-15 12:26:39.944305] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:11.694 [2024-05-15 12:26:40.128232] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:11.694 [2024-05-15 12:26:40.128257] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:11.970 12:26:40 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:11.970 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.253 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0 ]] 00:25:12.253 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:12.253 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:12.253 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@913 -- # (( max-- )) 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.254 [2024-05-15 12:26:40.657454] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:12.254 [2024-05-15 12:26:40.658671] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:12.254 [2024-05-15 12:26:40.658694] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.254 [2024-05-15 12:26:40.744935] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:12.254 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.512 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:12.512 12:26:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:25:12.770 [2024-05-15 12:26:41.053344] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:12.770 [2024-05-15 12:26:41.053363] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:12.770 [2024-05-15 12:26:41.053370] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:13.336 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:13.336 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:13.336 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:25:13.336 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:13.336 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:13.336 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.336 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.336 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:13.336 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:13.336 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.595 [2024-05-15 12:26:41.921799] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:13.595 [2024-05-15 12:26:41.921821] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.595 [2024-05-15 12:26:41.930016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.595 [2024-05-15 12:26:41.930038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.595 [2024-05-15 12:26:41.930054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.595 [2024-05-15 12:26:41.930067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.595 [2024-05-15 12:26:41.930081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.595 [2024-05-15 12:26:41.930095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.595 [2024-05-15 12:26:41.930110] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.595 [2024-05-15 12:26:41.930123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.595 [2024-05-15 12:26:41.930137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5481f0 is same with the state(5) to be set 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:13.595 [2024-05-15 12:26:41.940028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5481f0 (9): Bad file descriptor 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.595 [2024-05-15 12:26:41.950069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:13.595 [2024-05-15 12:26:41.950580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.595 [2024-05-15 12:26:41.950985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.595 [2024-05-15 12:26:41.951001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5481f0 with addr=10.0.0.2, port=4420 00:25:13.595 [2024-05-15 12:26:41.951016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5481f0 is same with the state(5) to be set 00:25:13.595 [2024-05-15 12:26:41.951035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5481f0 (9): Bad file descriptor 00:25:13.595 [2024-05-15 12:26:41.951064] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:13.595 [2024-05-15 12:26:41.951079] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:13.595 [2024-05-15 12:26:41.951093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:13.595 [2024-05-15 12:26:41.951111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.595 [2024-05-15 12:26:41.960130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:13.595 [2024-05-15 12:26:41.960615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.595 [2024-05-15 12:26:41.961018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.595 [2024-05-15 12:26:41.961032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5481f0 with addr=10.0.0.2, port=4420 00:25:13.595 [2024-05-15 12:26:41.961050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5481f0 is same with the state(5) to be set 00:25:13.595 [2024-05-15 12:26:41.961069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5481f0 (9): Bad file descriptor 00:25:13.595 [2024-05-15 12:26:41.961094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:13.595 [2024-05-15 12:26:41.961107] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:13.595 [2024-05-15 12:26:41.961120] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:13.595 [2024-05-15 12:26:41.961136] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.595 [2024-05-15 12:26:41.970187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:13.595 [2024-05-15 12:26:41.970620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.595 [2024-05-15 12:26:41.971049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.595 [2024-05-15 12:26:41.971064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5481f0 with addr=10.0.0.2, port=4420 00:25:13.595 [2024-05-15 12:26:41.971078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5481f0 is same with the state(5) to be set 00:25:13.595 [2024-05-15 12:26:41.971095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5481f0 (9): Bad file descriptor 00:25:13.595 [2024-05-15 12:26:41.971112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:13.595 [2024-05-15 12:26:41.971124] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:13.595 [2024-05-15 12:26:41.971137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:13.595 [2024-05-15 12:26:41.971164] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:25:13.595 [2024-05-15 12:26:41.980255] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.595 [2024-05-15 12:26:41.981304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.595 12:26:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.595 [2024-05-15 12:26:41.982211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.595 [2024-05-15 12:26:41.982238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5481f0 with addr=10.0.0.2, port=4420 00:25:13.596 [2024-05-15 12:26:41.982253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5481f0 is same with the state(5) to be set 00:25:13.596 [2024-05-15 12:26:41.982280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5481f0 (9): Bad file descriptor 00:25:13.596 [2024-05-15 12:26:41.982310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:13.596 [2024-05-15 12:26:41.982324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:13.596 [2024-05-15 12:26:41.982339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:13.596 [2024-05-15 12:26:41.982358] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.596 12:26:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:13.596 [2024-05-15 12:26:41.990312] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:13.596 [2024-05-15 12:26:41.990648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.596 [2024-05-15 12:26:41.991120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.596 [2024-05-15 12:26:41.991135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5481f0 with addr=10.0.0.2, port=4420 00:25:13.596 [2024-05-15 12:26:41.991149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5481f0 is same with the state(5) to be set 00:25:13.596 [2024-05-15 12:26:41.991167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5481f0 (9): Bad file descriptor 00:25:13.596 [2024-05-15 12:26:41.991202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:13.596 [2024-05-15 12:26:41.991217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:13.596 [2024-05-15 12:26:41.991230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:13.596 [2024-05-15 12:26:41.991246] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.596 [2024-05-15 12:26:42.000372] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:13.596 [2024-05-15 12:26:42.000856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.596 [2024-05-15 12:26:42.001113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.596 [2024-05-15 12:26:42.001128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5481f0 with addr=10.0.0.2, port=4420 00:25:13.596 [2024-05-15 12:26:42.001141] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5481f0 is same with the state(5) to be set 00:25:13.596 [2024-05-15 12:26:42.001159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5481f0 (9): Bad file descriptor 00:25:13.596 [2024-05-15 12:26:42.001176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:13.596 [2024-05-15 12:26:42.001195] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:13.596 [2024-05-15 12:26:42.001208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:13.596 [2024-05-15 12:26:42.001225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.596 [2024-05-15 12:26:42.009268] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:13.596 [2024-05-15 12:26:42.009287] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4421 == \4\4\2\1 ]] 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:13.596 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:13.854 
12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.854 12:26:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.795 [2024-05-15 12:26:43.315013] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:14.795 [2024-05-15 12:26:43.315031] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:14.795 [2024-05-15 12:26:43.315046] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:15.052 [2024-05-15 12:26:43.444506] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:15.309 [2024-05-15 12:26:43.711877] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:15.309 [2024-05-15 12:26:43.711904] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:15.309 12:26:43 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.309 request: 00:25:15.309 { 00:25:15.309 "name": "nvme", 00:25:15.309 "trtype": "tcp", 00:25:15.309 "traddr": "10.0.0.2", 00:25:15.309 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:15.309 "adrfam": "ipv4", 00:25:15.309 "trsvcid": "8009", 00:25:15.309 "wait_for_attach": true, 00:25:15.309 "method": "bdev_nvme_start_discovery", 00:25:15.309 "req_id": 1 00:25:15.309 } 00:25:15.309 Got JSON-RPC error response 00:25:15.309 response: 00:25:15.309 { 00:25:15.309 "code": -17, 00:25:15.309 "message": "File exists" 00:25:15.309 } 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:15.309 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.567 request: 00:25:15.567 { 00:25:15.567 "name": "nvme_second", 00:25:15.567 "trtype": "tcp", 00:25:15.567 "traddr": "10.0.0.2", 00:25:15.567 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:15.567 "adrfam": "ipv4", 00:25:15.567 "trsvcid": "8009", 00:25:15.567 "wait_for_attach": true, 00:25:15.567 "method": "bdev_nvme_start_discovery", 00:25:15.567 "req_id": 1 00:25:15.567 } 00:25:15.567 Got JSON-RPC error response 00:25:15.567 response: 00:25:15.567 { 00:25:15.567 "code": -17, 00:25:15.567 "message": "File exists" 00:25:15.567 } 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# sort 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.567 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:15.568 12:26:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:15.568 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:15.568 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:15.568 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:15.568 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:15.568 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:15.568 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:15.568 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:15.568 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.568 12:26:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.502 [2024-05-15 12:26:44.971387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.502 [2024-05-15 12:26:44.971784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.502 [2024-05-15 12:26:44.971807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x55ee00 with addr=10.0.0.2, port=8010 00:25:16.502 [2024-05-15 12:26:44.971827] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:16.502 [2024-05-15 12:26:44.971839] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:16.502 [2024-05-15 12:26:44.971851] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:17.877 [2024-05-15 12:26:45.974013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.877 [2024-05-15 12:26:45.974425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.877 [2024-05-15 12:26:45.974447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5612f0 with addr=10.0.0.2, port=8010 00:25:17.877 [2024-05-15 12:26:45.974465] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:17.877 [2024-05-15 12:26:45.974476] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:17.877 [2024-05-15 12:26:45.974488] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:18.811 [2024-05-15 12:26:46.975955] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:18.811 request: 00:25:18.811 { 00:25:18.811 "name": "nvme_second", 00:25:18.811 "trtype": "tcp", 00:25:18.811 "traddr": "10.0.0.2", 00:25:18.811 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:25:18.811 "adrfam": "ipv4", 00:25:18.811 "trsvcid": "8010", 00:25:18.811 "attach_timeout_ms": 3000, 00:25:18.811 "method": "bdev_nvme_start_discovery", 00:25:18.811 "req_id": 1 00:25:18.811 } 00:25:18.811 Got JSON-RPC error response 00:25:18.811 response: 00:25:18.811 { 00:25:18.811 "code": -110, 00:25:18.811 "message": "Connection timed out" 00:25:18.811 } 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:18.811 12:26:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2241317 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:18.811 rmmod nvme_tcp 00:25:18.811 rmmod nvme_fabrics 00:25:18.811 rmmod nvme_keyring 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2241039 ']' 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2241039 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' -z 2241039 ']' 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # kill -0 2241039 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # uname 00:25:18.811 12:26:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2241039 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:18.811 12:26:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2241039' 00:25:18.811 killing process with pid 2241039 00:25:18.812 12:26:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # kill 2241039 00:25:18.812 [2024-05-15 12:26:47.160242] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:18.812 12:26:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@971 -- # wait 2241039 00:25:19.070 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:19.070 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:19.070 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:19.070 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:19.070 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:19.070 12:26:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.070 12:26:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.070 12:26:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.974 12:26:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:20.974 00:25:20.974 real 0m18.706s 00:25:20.974 user 0m22.173s 00:25:20.974 sys 0m6.681s 00:25:20.974 12:26:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:20.974 12:26:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.974 ************************************ 00:25:20.974 END TEST nvmf_host_discovery 00:25:20.974 ************************************ 00:25:20.974 12:26:49 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:20.974 12:26:49 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:20.974 12:26:49 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:20.974 12:26:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.233 ************************************ 00:25:21.233 START TEST nvmf_host_multipath_status 00:25:21.233 ************************************ 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:21.233 * Looking for test storage... 
00:25:21.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.233 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:21.234 12:26:49 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:21.234 12:26:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:27.792 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.792 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:27.792 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:27.792 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:27.792 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:27.792 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:27.792 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:27.792 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:27.792 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:27.793 12:26:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:27.793 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:27.793 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
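Note on the device discovery above: the trace matches both Intel E810 ports (0000:af:00.0 and 0000:af:00.1, vendor:device 0x8086:0x159b), and the loop that follows resolves each PCI address to its kernel net device by globbing /sys/bus/pci/devices/<pci>/net. A minimal bash sketch of that lookup, reusing the PCI addresses reported in the log:
# reproduce the PCI -> netdev lookup performed by the trace
for pci in 0000:af:00.0 0000:af:00.1; do
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net device under $pci: ${netdir##*/}"   # e.g. cvl_0_0, cvl_0_1
    done
done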
00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:27.793 Found net devices under 0000:af:00.0: cvl_0_0 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:27.793 Found net devices under 0000:af:00.1: cvl_0_1 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:27.793 12:26:56 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:27.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:25:27.793 00:25:27.793 --- 10.0.0.2 ping statistics --- 00:25:27.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.793 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:27.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:25:27.793 00:25:27.793 --- 10.0.0.1 ping statistics --- 00:25:27.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.793 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2246632 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2246632 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 2246632 ']' 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.793 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:27.794 12:26:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:28.051 [2024-05-15 12:26:56.359362] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
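Note on the nvmf_tcp_init sequence above: it splits the two E810 ports into a target namespace and a host-side initiator. cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and connectivity is verified with ping in both directions. A condensed sketch of that setup, using only the interface names and addresses reported in the trace:
# target side lives in a network namespace, initiator side in the root namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                           # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator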
00:25:28.051 [2024-05-15 12:26:56.359410] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.051 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.051 [2024-05-15 12:26:56.433669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:28.051 [2024-05-15 12:26:56.505802] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.051 [2024-05-15 12:26:56.505839] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.051 [2024-05-15 12:26:56.505852] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.051 [2024-05-15 12:26:56.505863] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.051 [2024-05-15 12:26:56.505873] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.051 [2024-05-15 12:26:56.505926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.051 [2024-05-15 12:26:56.505930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.982 12:26:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:28.982 12:26:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:25:28.982 12:26:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:28.982 12:26:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:28.982 12:26:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:28.982 12:26:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.982 12:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2246632 00:25:28.982 12:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:28.982 [2024-05-15 12:26:57.346960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.982 12:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:29.240 Malloc0 00:25:29.240 12:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:29.240 12:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:29.497 12:26:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.754 [2024-05-15 12:26:58.054952] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:25:29.754 [2024-05-15 12:26:58.055251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.754 12:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:29.754 [2024-05-15 12:26:58.231618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:29.754 12:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:29.754 12:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2247066 00:25:29.754 12:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:29.754 12:26:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2247066 /var/tmp/bdevperf.sock 00:25:29.754 12:26:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 2247066 ']' 00:25:29.754 12:26:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:29.754 12:26:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:29.754 12:26:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:29.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:29.754 12:26:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:29.754 12:26:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:30.686 12:26:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:30.686 12:26:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:25:30.686 12:26:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:30.943 12:26:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:31.200 Nvme0n1 00:25:31.200 12:26:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:31.790 Nvme0n1 00:25:31.790 12:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:31.790 12:27:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:33.728 12:27:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:33.728 12:27:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:33.985 12:27:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:33.985 12:27:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:35.355 12:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:35.355 12:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:35.355 12:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.355 12:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:35.355 12:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.355 12:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:35.355 12:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.355 12:27:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.355 12:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.356 12:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.356 12:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.356 12:27:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:35.613 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.613 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:35.613 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.613 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.871 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.871 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.871 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:35.871 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.871 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.871 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:35.871 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.871 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.128 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.128 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:36.128 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:36.386 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:36.644 12:27:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:37.577 12:27:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:37.577 12:27:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:37.577 12:27:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.577 12:27:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.577 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.577 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:37.577 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.577 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:37.835 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.835 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.835 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.835 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:38.093 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.093 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:38.093 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.093 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:38.351 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.351 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:38.351 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.351 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:38.351 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.351 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:38.351 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.351 12:27:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:38.609 12:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.609 12:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:38.609 12:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:38.867 12:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:38.867 12:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:40.242 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:40.242 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:40.242 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.242 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:40.242 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.242 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:40.242 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.242 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.242 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.242 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.242 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.242 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:40.500 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.500 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:40.500 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.500 12:27:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:40.758 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.758 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:40.758 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.758 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:40.758 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.758 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:40.758 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.016 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:41.016 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.016 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:41.016 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:41.275 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:41.533 12:27:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:42.468 12:27:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:42.468 12:27:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:42.468 12:27:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.468 12:27:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.726 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.726 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:42.726 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:42.726 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.985 12:27:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.985 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:42.985 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.985 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:42.985 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.985 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:42.985 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.985 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.244 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.244 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:43.244 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.244 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:43.503 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.503 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:43.503 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.503 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:43.503 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.503 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:43.503 12:27:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:43.761 12:27:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:44.019 12:27:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:44.954 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:44.954 12:27:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:44.954 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.954 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:45.212 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.212 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:45.212 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.212 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.212 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.212 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.212 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.212 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:45.470 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.470 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:45.470 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.470 12:27:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.728 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.728 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:45.728 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.728 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.728 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.728 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:45.728 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.728 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:45.986 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.986 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:45.986 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:46.244 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:46.244 12:27:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:47.677 12:27:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:47.677 12:27:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:47.677 12:27:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.677 12:27:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.677 12:27:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:47.677 12:27:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:47.677 12:27:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.677 12:27:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:47.677 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.677 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:47.677 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.677 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:47.935 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.935 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:47.935 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.935 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:48.194 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.194 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:48.194 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:48.194 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.194 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.194 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:48.194 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.194 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.452 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.452 12:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:48.711 12:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:48.711 12:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:48.711 12:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:48.969 12:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:49.905 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:49.905 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:49.905 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.905 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:50.163 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.163 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:50.163 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.163 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:25:50.422 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.422 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:50.422 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.422 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.680 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.680 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.680 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.680 12:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.680 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.680 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.680 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.680 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.939 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.939 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:50.939 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.939 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:51.197 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.197 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:51.197 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:51.198 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:51.456 12:27:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:52.392 12:27:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:25:52.392 12:27:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:52.392 12:27:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.392 12:27:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:52.650 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:52.650 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:52.650 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.650 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:52.909 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.909 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:52.909 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:52.909 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.909 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.909 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:52.909 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.909 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:53.168 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.168 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:53.168 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.168 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.426 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.426 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:53.426 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.426 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:53.426 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.426 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:53.426 12:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:53.685 12:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:53.944 12:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:54.879 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:54.879 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:54.879 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.879 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:55.138 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.138 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:55.138 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.138 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:55.138 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.138 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:55.138 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.138 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:55.397 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.397 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:55.397 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.397 12:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:55.655 12:27:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.655 12:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:55.655 12:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.655 12:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:55.914 12:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.914 12:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:55.914 12:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.914 12:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:55.914 12:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.914 12:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:55.914 12:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:56.172 12:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:56.431 12:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:57.366 12:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:57.366 12:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:57.366 12:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.366 12:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:57.624 12:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.624 12:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:57.624 12:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.624 12:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:57.624 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:57.624 12:27:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:57.624 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.625 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:57.883 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.883 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:57.883 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.883 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:58.141 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.141 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:58.141 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.141 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:58.141 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.141 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:58.141 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.141 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:58.400 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.400 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2247066 00:25:58.400 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 2247066 ']' 00:25:58.400 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 2247066 00:25:58.400 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:25:58.400 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:58.400 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2247066 00:25:58.400 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:25:58.400 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:25:58.400 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 
2247066' 00:25:58.400 killing process with pid 2247066 00:25:58.400 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 2247066 00:25:58.400 12:27:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 2247066 00:25:58.677 Connection closed with partial response: 00:25:58.677 00:25:58.677 00:25:58.677 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2247066 00:25:58.677 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:58.677 [2024-05-15 12:26:58.280985] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:25:58.677 [2024-05-15 12:26:58.281041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247066 ] 00:25:58.677 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.677 [2024-05-15 12:26:58.346787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.677 [2024-05-15 12:26:58.418206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.677 Running I/O for 90 seconds... 00:25:58.677 [2024-05-15 12:27:12.139317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.677 [2024-05-15 12:27:12.139368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:58.677 [2024-05-15 12:27:12.139410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.677 [2024-05-15 12:27:12.139421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:58.677 [2024-05-15 12:27:12.139436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.677 [2024-05-15 12:27:12.139446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:58.677 [2024-05-15 12:27:12.139461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.677 [2024-05-15 12:27:12.139470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:58.677 [2024-05-15 12:27:12.139484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.677 [2024-05-15 12:27:12.139493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:58.677 [2024-05-15 12:27:12.139508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.677 [2024-05-15 12:27:12.139517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:58.677 [2024-05-15 12:27:12.139531] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.139985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.139999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140325] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.678 [2024-05-15 12:27:12.140397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:58.678 [2024-05-15 12:27:12.140412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.140420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.140435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.140444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.140459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.140468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.140482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.140491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.140505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.140514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.140529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.140538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.140552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:58.679 [2024-05-15 12:27:12.140561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.140576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.679 [2024-05-15 12:27:12.140585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.679 [2024-05-15 12:27:12.141659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.679 [2024-05-15 12:27:12.141689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.679 [2024-05-15 12:27:12.141717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.679 [2024-05-15 12:27:12.141744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.679 [2024-05-15 12:27:12.141771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.141798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.141825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.141852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:63 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.141879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.141906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.141932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.141959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.141977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.141988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:58.679 [2024-05-15 12:27:12.142412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 
dnr:0 00:25:58.679 [2024-05-15 12:27:12.142440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.679 [2024-05-15 12:27:12.142450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.680 [2024-05-15 12:27:12.142478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.680 [2024-05-15 12:27:12.142506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.680 [2024-05-15 12:27:12.142534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.680 [2024-05-15 12:27:12.142561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.680 [2024-05-15 12:27:12.142589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.680 [2024-05-15 12:27:12.142617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.680 [2024-05-15 12:27:12.142644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.680 [2024-05-15 12:27:12.142748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:12.142783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:12.142813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:12.142848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:12.142877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:12.142906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:12.142936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:12.142966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:12.142986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:12.142995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.711238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.680 [2024-05-15 12:27:24.711277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.711301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.680 [2024-05-15 12:27:24.711312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.711926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.711940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.711955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.711964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.711984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.711993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.712016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.712039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.712062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.712102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.712125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.712148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.712172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 
[2024-05-15 12:27:24.712200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.712224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.712248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.712272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.680 [2024-05-15 12:27:24.712298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.712561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:58.680 [2024-05-15 12:27:24.712577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.680 [2024-05-15 12:27:24.712587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92736 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:84 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.712984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.712993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.713008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.713017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.713031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.713040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.713055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.713065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.713079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.713088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.713103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.713112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.713128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.681 [2024-05-15 12:27:24.713138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:58.681 [2024-05-15 12:27:24.713152] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:58.681 [2024-05-15 12:27:24.713161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0
[... 00:25:58.681 - 00:25:58.687: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs for READ and WRITE commands on sqid:1 nsid:1 (lba range 92384-93688, len:8), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:25:58.687 [2024-05-15 12:27:24.724389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:58.687 [2024-05-15 12:27:24.724403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.687 [2024-05-15 12:27:24.724413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:58.687 [2024-05-15 12:27:24.724428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.687 [2024-05-15 12:27:24.724438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:58.687 [2024-05-15 12:27:24.724452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.687 [2024-05-15 12:27:24.724462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.724476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.724485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.724500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.724509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.724524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.724533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.724547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.724556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.724571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.724580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.724595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.724604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.725548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.725577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.725601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.725625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.725650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.725674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.725697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.725721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.725745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.725768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:58.688 [2024-05-15 12:27:24.725793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.725817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.725840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.725867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.725891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.725914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.725939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.725964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.725978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.725988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.726002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.726011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.726027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 
nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.726036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.726050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.726059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.726075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.726084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.726099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.726108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.726122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.688 [2024-05-15 12:27:24.726132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.726147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.688 [2024-05-15 12:27:24.726158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:58.688 [2024-05-15 12:27:24.726172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.726184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.726204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.726214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.726228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.726238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.726254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.726263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.727777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.727796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.727812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.727822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.727836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.727846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.727860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.727869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.727884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.727893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.727907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.727916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.727930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.727940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.727954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.727966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.727981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.727990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.728013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:25:58.689 [2024-05-15 12:27:24.728028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.728037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.728060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.728084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.728107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.728131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.728154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.728178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.728207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.728230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.728254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.728279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.728302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.728326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.728350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.728373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.728397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.728421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.728444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.728459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.689 [2024-05-15 12:27:24.728468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.730269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.730288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:58.689 [2024-05-15 12:27:24.730306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.689 [2024-05-15 12:27:24.730315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.730339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.730366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.730389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.730413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.730436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.730460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.730483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.730507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:58.690 [2024-05-15 12:27:24.730530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.730553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.730577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.730601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.730624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.730649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.730673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.730697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.730720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.730744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 
nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.730767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.730791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.730814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.730838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.730853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.730862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.731391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.731416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.731442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.731466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.731490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.731514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.731537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.731560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.731584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.731608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.731631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.690 [2024-05-15 12:27:24.731655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.690 [2024-05-15 12:27:24.731678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:58.690 [2024-05-15 12:27:24.731692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.731702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.731716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.731725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:25:58.691 [2024-05-15 12:27:24.731741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.731750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.731765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.731774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.731789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.731798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.731812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.731821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.731836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.731845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.731859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.731868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.731882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.731892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.731906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.731916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.731930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.731939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.732448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.732473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.732549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.732573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.732597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.732858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.732882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 
[2024-05-15 12:27:24.732929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.691 [2024-05-15 12:27:24.732953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:58.691 [2024-05-15 12:27:24.732967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.691 [2024-05-15 12:27:24.732976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.732991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.692 [2024-05-15 12:27:24.733607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.692 [2024-05-15 12:27:24.733678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.692 [2024-05-15 12:27:24.733774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.692 [2024-05-15 12:27:24.733844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.733859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.733868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.735142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.735160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.735177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.692 [2024-05-15 12:27:24.735186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.735208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.692 [2024-05-15 12:27:24.735217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.735232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.735241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.735256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.692 [2024-05-15 12:27:24.735265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:25:58.692 [2024-05-15 12:27:24.735279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.692 [2024-05-15 12:27:24.735288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.735303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.692 [2024-05-15 12:27:24.735313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.735330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.692 [2024-05-15 12:27:24.735340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.735354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.692 [2024-05-15 12:27:24.735363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:58.692 [2024-05-15 12:27:24.735377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.735386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.735410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.735434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.735457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.735481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.735504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.735528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.735551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.735575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.735598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.735623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.735647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.735671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.735695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.735718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.735742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.735766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.735780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.735789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.737257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.737277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.737294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.737304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.737319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.737328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.737342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.737352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.737366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.737378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.737393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.737402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.737416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.737426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.737440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.693 [2024-05-15 12:27:24.737449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.737464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.693 [2024-05-15 12:27:24.737474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.737488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.737498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.737513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.737522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.738047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.738063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:58.693 [2024-05-15 12:27:24.738079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.693 [2024-05-15 12:27:24.738089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.738113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.738138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.738162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.738186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.738317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.738388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.738412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.738459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.738556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.738629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.738652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:25:58.694 [2024-05-15 12:27:24.738690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.738785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.738795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.739689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.739707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.739724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-05-15 12:27:24.739734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.739749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.739758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:58.694 [2024-05-15 12:27:24.739773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.694 [2024-05-15 12:27:24.739782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.739797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.739806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.739821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.739830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.739844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.739854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.739868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.739877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.739892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.739901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.739915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.739924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.739939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.739948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.739962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.739974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.739989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.739998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.740022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.740045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.740069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.740562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.740587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.740611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.740635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.740658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.740682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.740706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.740729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:58.695 [2024-05-15 12:27:24.740755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.740779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.740803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.740827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.740852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.740875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.740901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.740924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.740948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.740972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.740986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-05-15 12:27:24.740995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:58.695 [2024-05-15 12:27:24.741010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.695 [2024-05-15 12:27:24.741019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.741044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.741067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.741091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.741635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.741661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.741685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.741708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.741732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.741756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.741780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.741803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.741827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.741853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.741878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.741901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.741925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.741948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.741962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.741971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:25:58.696 [2024-05-15 12:27:24.741986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.741995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.742009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.742018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.742033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.742042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.742056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.742066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.742904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.742922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.742939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.742948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.742963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.742975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.742989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.742999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.743013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.743022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.743038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.743048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.743062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.743072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.743086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.743095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.743110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.696 [2024-05-15 12:27:24.743119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.743134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.743143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:58.696 [2024-05-15 12:27:24.743157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.696 [2024-05-15 12:27:24.743166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.743181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.743196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.743211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.743220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.743234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.743243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.743258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.743267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.743858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.743875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.743891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.743901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.743915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.743924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.743939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.743948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.743963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.743972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.743986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.743995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.744019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.744042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.744066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.744090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.697 [2024-05-15 12:27:24.744113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.744137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.744163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.744186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.744215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.744875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.744901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.744925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.744949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.744972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.744987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.744996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.745010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.745020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.745034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.745044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.745059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.745068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.745083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.745095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.745110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.745119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.745133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.745142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.745156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.745166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.745180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.697 [2024-05-15 12:27:24.745189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.745209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.745219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:58.697 [2024-05-15 12:27:24.745233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.697 [2024-05-15 12:27:24.745242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.745257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.745266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.745281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.698 [2024-05-15 12:27:24.745290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.745921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.745937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.745954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.745963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.745977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.745987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.746013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.698 [2024-05-15 12:27:24.746037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.698 [2024-05-15 12:27:24.746061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.698 [2024-05-15 12:27:24.746084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:25:58.698 [2024-05-15 12:27:24.746099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.746108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.746131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.698 [2024-05-15 12:27:24.746155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.746179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.698 [2024-05-15 12:27:24.746208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.746232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.746256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.746280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.746828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.746856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.746880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.746904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.698 [2024-05-15 12:27:24.746927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.698 [2024-05-15 12:27:24.746951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.698 [2024-05-15 12:27:24.746975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.746989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.698 [2024-05-15 12:27:24.746998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.747013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.747022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:58.698 [2024-05-15 12:27:24.747036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.698 [2024-05-15 12:27:24.747045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.747060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.747070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.747084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.747094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.747108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.747118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.747133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.747143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.747157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.747166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.747181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.747197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.747212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.747221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.747235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.747244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.747259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.747268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.747980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.747996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.748022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 
[2024-05-15 12:27:24.748045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.748068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.748091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.748115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.748140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.748163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.748186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.748215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.748238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.748261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95016 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.748284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.748307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.748321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.748330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.749171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.749188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.749210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.749219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.749233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.749242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.749256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.749268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.749282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.699 [2024-05-15 12:27:24.749291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.749305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.749313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.749328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.749336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.749350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.699 [2024-05-15 12:27:24.749359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:58.699 [2024-05-15 12:27:24.749373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.749382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.749396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.749405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.749419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.749428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.749442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.749450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.749464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.749473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.749487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.749496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.749510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.749519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.749533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.749542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.749557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.749566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:58.700 
[2024-05-15 12:27:24.749925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.749939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.749955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.749964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.749978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.749987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.750010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.750034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.750057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.750080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.750103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.750126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.750149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.750172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.750204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.750227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.750250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.750273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.750296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.750319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.750973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.750990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.751006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.751015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.751029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.751039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.751053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.751062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.751076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.751085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.751099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.751108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.751122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.700 [2024-05-15 12:27:24.751133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.751147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.751156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.751170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.700 [2024-05-15 12:27:24.751179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:58.700 [2024-05-15 12:27:24.751199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.751208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.751222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.751231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.751245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.751254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.751268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.701 [2024-05-15 12:27:24.751277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.751291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.751300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.751314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.751323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.751337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.751345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.751360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.751368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.751382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.751391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.751405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.751416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.751430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.751439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.751452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.751462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.751476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.751485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.752368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.752394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.752418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.752441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.752464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.752487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.752510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.752533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.752555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.752582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.752605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.752628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.752651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.752665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.752674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.753339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.753356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.753372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.753381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.753395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.753404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.753418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.753427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.753441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.753450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.753464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.753473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:25:58.701 [2024-05-15 12:27:24.753487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.701 [2024-05-15 12:27:24.753496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.753513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.753522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.753537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.701 [2024-05-15 12:27:24.753546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:58.701 [2024-05-15 12:27:24.753562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.753572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.753587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.753598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.753612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.753622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.753637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.753648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.753662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.753672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.753687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.753698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.753713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.753723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.753738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.753748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.754337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.754361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.754388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.754411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.754435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.754459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.754483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.754506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.754529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.754552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.754576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.754601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.754624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.754648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.754675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.754698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.754722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.754746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.754761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.702 [2024-05-15 12:27:24.754771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.755409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.755425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.755441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.755450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.755464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.755473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.755488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.755496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.755510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.755519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.755533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.755542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.755556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.755565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.755579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.702 [2024-05-15 12:27:24.755588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:58.702 [2024-05-15 12:27:24.755605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.702 [2024-05-15 12:27:24.755614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.755628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.755637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.755651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.703 [2024-05-15 12:27:24.755660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.755674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.703 [2024-05-15 12:27:24.755682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.755696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.755705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.755719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.703 [2024-05-15 12:27:24.755728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.755742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.755751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.755765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.755774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.756422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.756447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.756470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.756495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.703 [2024-05-15 12:27:24.756521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.703 [2024-05-15 12:27:24.756545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.703 [2024-05-15 12:27:24.756569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.756592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.756615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.756639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.756662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.703 [2024-05-15 12:27:24.756686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.703 [2024-05-15 12:27:24.756709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:25:58.703 [2024-05-15 12:27:24.756724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.756734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.703 [2024-05-15 12:27:24.756757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.756772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.703 [2024-05-15 12:27:24.756781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.757518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.703 [2024-05-15 12:27:24.757539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.757556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.703 [2024-05-15 12:27:24.757565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.757580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.757589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.757604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.757613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:58.703 [2024-05-15 12:27:24.757627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.703 [2024-05-15 12:27:24.757637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.757651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.757660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.757675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.757684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.757698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.757707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.757722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.757731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.757745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.757754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.757769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.757778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.757792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.757802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.757816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.757827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.757841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.757850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.757865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.757874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.757889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.757898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.758553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.758579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.758603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.758626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.758650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.758675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.758699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.758723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.758746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.758772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:58.704 [2024-05-15 12:27:24.758796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.758819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.758843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.758866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.758881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.758890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.759512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.759528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.759544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.759553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.759568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.704 [2024-05-15 12:27:24.759577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.759591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.759600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.759614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.759623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.759637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.759646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.759662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.759671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:58.704 [2024-05-15 12:27:24.759685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.704 [2024-05-15 12:27:24.759694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.759708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.759717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.759731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.759740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.759754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.759763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.759777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.759786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.759800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.759808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.759822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.759831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.759846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.759854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.759869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.759877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.759892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.759900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.760685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.760713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.760736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.760759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.760782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.760805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.760828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.760851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:25:58.705 [2024-05-15 12:27:24.760865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.760874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.760897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.760920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.760943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.760966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.760980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.760990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.761005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.761014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.761547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.761564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.761580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.761589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.761604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.761613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.761627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.761636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.761650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.761659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.761673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.761682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.761696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.705 [2024-05-15 12:27:24.761705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.761719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.761728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.761742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.761751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:58.705 [2024-05-15 12:27:24.761765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.705 [2024-05-15 12:27:24.761774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.761788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.761797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.761813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.761822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.761836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.761845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.761859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.761868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.761882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.761891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.761905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.761914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.761928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.761938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.762476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.762501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.762524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.762548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.762571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.706 [2024-05-15 12:27:24.762594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.762620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.762645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.762668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.762691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.762714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.762736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.762759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.762782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.762805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 
nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.762828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.762851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.762866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.762874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.763513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.763533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.763551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.763560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.763575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.763585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.763600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.763610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.763626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.763636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.763652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:58.706 [2024-05-15 12:27:24.763661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.763675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.706 [2024-05-15 12:27:24.763686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:58.706 [2024-05-15 12:27:24.763701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.706 - 00:25:58.712 [2024-05-15 12:27:24.763711 - 12:27:24.775043] nvme_qpair.c: nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs repeat for READ and WRITE commands on sqid:1 (lba 95328 - 97088, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0
00:25:58.712 Received shutdown signal, test time was about 26.620578 seconds
00:25:58.712 
00:25:58.712 Latency(us)
00:25:58.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:58.712 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:58.712 Verification LBA range: start 0x0 length 0x4000
Nvme0n1 : 26.62 10667.34 41.67 0.00 0.00 11973.81 812.65 3019898.88 00:25:58.712 =================================================================================================================== 00:25:58.712 Total : 10667.34 41.67 0.00 0.00 11973.81 812.65 3019898.88 00:25:58.712 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:58.970 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:58.970 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:58.970 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:58.970 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.970 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:58.970 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.970 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:58.970 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.970 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.970 rmmod nvme_tcp 00:25:58.970 rmmod nvme_fabrics 00:25:58.970 rmmod nvme_keyring 00:25:58.970 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2246632 ']' 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2246632 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 2246632 ']' 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 2246632 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2246632 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2246632' 00:25:58.971 killing process with pid 2246632 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 2246632 00:25:58.971 [2024-05-15 12:27:27.364158] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:58.971 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 2246632 00:25:59.229 12:27:27 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:59.229 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:59.229 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:59.229 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:59.229 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:59.229 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.229 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.229 12:27:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.160 12:27:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:01.160 00:26:01.160 real 0m40.125s 00:26:01.160 user 1m42.184s 00:26:01.160 sys 0m14.235s 00:26:01.160 12:27:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:01.160 12:27:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:01.160 ************************************ 00:26:01.160 END TEST nvmf_host_multipath_status 00:26:01.160 ************************************ 00:26:01.420 12:27:29 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:01.420 12:27:29 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:01.420 12:27:29 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:01.420 12:27:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:01.420 ************************************ 00:26:01.420 START TEST nvmf_discovery_remove_ifc 00:26:01.420 ************************************ 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:01.420 * Looking for test storage... 
00:26:01.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:01.420 12:27:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.980 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.980 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:07.980 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:07.981 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:07.981 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.981 12:27:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:07.981 Found net devices under 0000:af:00.0: cvl_0_0 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:07.981 Found net devices under 0000:af:00.1: cvl_0_1 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.981 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:08.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:26:08.239 00:26:08.239 --- 10.0.0.2 ping statistics --- 00:26:08.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.239 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:08.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:26:08.239 00:26:08.239 --- 10.0.0.1 ping statistics --- 00:26:08.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.239 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2256300 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2256300 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 2256300 ']' 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:08.239 12:27:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.239 [2024-05-15 12:27:36.610048] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:26:08.239 [2024-05-15 12:27:36.610094] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.239 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.239 [2024-05-15 12:27:36.683326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.239 [2024-05-15 12:27:36.758095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.239 [2024-05-15 12:27:36.758127] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.239 [2024-05-15 12:27:36.758137] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.239 [2024-05-15 12:27:36.758146] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.239 [2024-05-15 12:27:36.758153] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.239 [2024-05-15 12:27:36.758173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.172 [2024-05-15 12:27:37.489420] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.172 [2024-05-15 12:27:37.497392] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:09.172 [2024-05-15 12:27:37.497583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:09.172 null0 00:26:09.172 [2024-05-15 12:27:37.529573] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2256497 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2256497 /tmp/host.sock 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 2256497 ']' 00:26:09.172 12:27:37 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:09.172 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:09.172 12:27:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.172 [2024-05-15 12:27:37.596458] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:26:09.172 [2024-05-15 12:27:37.596500] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256497 ] 00:26:09.172 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.172 [2024-05-15 12:27:37.665256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.430 [2024-05-15 12:27:37.740827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.995 12:27:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.364 [2024-05-15 12:27:39.540064] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:11.364 [2024-05-15 12:27:39.540090] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:11.364 [2024-05-15 
12:27:39.540105] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:11.364 [2024-05-15 12:27:39.670495] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:11.364 [2024-05-15 12:27:39.770000] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:11.364 [2024-05-15 12:27:39.770040] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:11.365 [2024-05-15 12:27:39.770062] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:11.365 [2024-05-15 12:27:39.770075] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:11.365 [2024-05-15 12:27:39.770094] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:11.365 [2024-05-15 12:27:39.777858] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11f5860 was disconnected and freed. delete nvme_qpair. 
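The wait_for_bdev/get_bdev_list calls traced above poll the host app's RPC socket until the freshly discovered namespace shows up as a bdev. A condensed sketch of that polling pattern, not part of the log output (the helper bodies and the 10-try limit are assumptions; the rpc.py path, the /tmp/host.sock socket and the jq/sort/xargs pipeline are the ones visible in the trace):

# Sketch only -- reconstructs the get_bdev_list/wait_for_bdev pattern traced above.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock"

get_bdev_list() {
    # List the bdev names known to the host app, normalized to one sorted line.
    $RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the bdev list matches the expected value.
    local expected=$1 tries=10
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
        (( --tries > 0 )) || return 1
    done
}

wait_for_bdev nvme0n1    # as at host/discovery_remove_ifc.sh@72 above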
00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:11.365 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:11.622 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:11.622 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:11.622 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.622 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:11.622 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.622 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.622 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:11.622 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:11.622 12:27:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.622 12:27:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:11.622 12:27:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:12.554 12:27:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.554 12:27:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.554 12:27:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.554 12:27:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.554 12:27:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.554 12:27:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.554 12:27:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.554 12:27:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.554 12:27:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:12.554 12:27:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.925 12:27:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.925 12:27:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.925 12:27:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.925 12:27:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:13.925 12:27:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.925 12:27:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:26:13.925 12:27:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.925 12:27:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:13.925 12:27:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:13.925 12:27:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:14.857 12:27:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.857 12:27:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.857 12:27:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.857 12:27:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.857 12:27:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.857 12:27:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.857 12:27:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.857 12:27:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.857 12:27:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:14.857 12:27:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:15.789 12:27:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.789 12:27:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.789 12:27:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.789 12:27:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.789 12:27:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.789 12:27:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.789 12:27:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.789 12:27:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.789 12:27:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:15.789 12:27:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.722 [2024-05-15 12:27:45.210917] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:16.722 [2024-05-15 12:27:45.210963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.722 [2024-05-15 12:27:45.210980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.722 [2024-05-15 12:27:45.210994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.722 [2024-05-15 12:27:45.211008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:16.722 [2024-05-15 12:27:45.211023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.722 [2024-05-15 12:27:45.211036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.722 [2024-05-15 12:27:45.211051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.722 [2024-05-15 12:27:45.211064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.722 [2024-05-15 12:27:45.211080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.722 [2024-05-15 12:27:45.211096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.722 [2024-05-15 12:27:45.211110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bc990 is same with the state(5) to be set 00:26:16.722 [2024-05-15 12:27:45.220938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11bc990 (9): Bad file descriptor 00:26:16.722 12:27:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.722 12:27:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.722 12:27:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.722 12:27:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.722 12:27:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.722 12:27:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.722 12:27:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.722 [2024-05-15 12:27:45.230980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:18.094 [2024-05-15 12:27:46.247214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:19.026 [2024-05-15 12:27:47.271218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:19.026 [2024-05-15 12:27:47.271264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11bc990 with addr=10.0.0.2, port=4420 00:26:19.026 [2024-05-15 12:27:47.271290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bc990 is same with the state(5) to be set 00:26:19.026 [2024-05-15 12:27:47.271414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11bc990 (9): Bad file descriptor 00:26:19.026 [2024-05-15 12:27:47.271454] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
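The connect() failures with errno 110 above are the intended fault: earlier in the trace (host/discovery_remove_ifc.sh@75-76) the target's data address was deleted and its port brought down inside the cvl_0_0_ns_spdk namespace, so the host's reconnect attempts to 10.0.0.2 port 4420 can no longer succeed. The traced commands, condensed for reference (namespace and device names as seen in this run):

# Remove the target-side path (steps @75/@76 in the trace above).
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

# Restored later (steps @82/@83), after which discovery re-attaches the subsystem.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up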
00:26:19.026 [2024-05-15 12:27:47.271494] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:19.026 [2024-05-15 12:27:47.271531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.026 [2024-05-15 12:27:47.271553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.026 [2024-05-15 12:27:47.271574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.026 [2024-05-15 12:27:47.271593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.026 [2024-05-15 12:27:47.271613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.026 [2024-05-15 12:27:47.271631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.026 [2024-05-15 12:27:47.271650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.026 [2024-05-15 12:27:47.271669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.026 [2024-05-15 12:27:47.271688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.026 [2024-05-15 12:27:47.271707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.026 [2024-05-15 12:27:47.271726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
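The controller lands in the failed state here because the discovery session was started with tight reconnect limits (host/discovery_remove_ifc.sh@69 above): a reconnect attempt every 1 s, fast I/O failure after 1 s, and the controller declared lost 2 s after the path drops. The RPC as issued in this run, shown as a sketch for reference (only the three timeout flags drive the behaviour above):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach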
00:26:19.026 [2024-05-15 12:27:47.272333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11bbe20 (9): Bad file descriptor 00:26:19.026 [2024-05-15 12:27:47.273347] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:19.026 [2024-05-15 12:27:47.273367] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:19.026 12:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.026 12:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:19.026 12:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.959 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.216 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:20.216 12:27:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:21.149 [2024-05-15 12:27:49.324992] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:21.149 [2024-05-15 12:27:49.325012] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:21.149 [2024-05-15 12:27:49.325026] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:21.149 [2024-05-15 12:27:49.455415] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:21.149 [2024-05-15 12:27:49.512790] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:21.149 [2024-05-15 12:27:49.512824] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:21.149 [2024-05-15 12:27:49.512841] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:21.149 [2024-05-15 12:27:49.512855] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:21.149 [2024-05-15 12:27:49.512863] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.149 [2024-05-15 12:27:49.521675] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11fff80 was disconnected and freed. delete nvme_qpair. 
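With nvme1n1 present again the test's final check passes, and the teardown below stops the host and target apps by PID. A condensed sketch of the killprocess helper as traced below (the sudo branch is not exercised in this run and is only stubbed here; reactor_0/reactor_1 are the SPDK process names reported by ps in this run):

# Sketch of the killprocess pattern traced below -- stops an SPDK app by pid.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")         # reactor_0 / reactor_1 in this run
    [[ $name != sudo ]] || return 1                 # sudo wrapper case omitted in this sketch
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap; tolerate non-child pids
}

killprocess 2256497    # host app, then killprocess 2256300 for the target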
00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2256497 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 2256497 ']' 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 2256497 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2256497 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2256497' 00:26:21.149 killing process with pid 2256497 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 2256497 00:26:21.149 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 2256497 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:21.407 rmmod nvme_tcp 00:26:21.407 rmmod nvme_fabrics 00:26:21.407 rmmod nvme_keyring 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2256300 ']' 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2256300 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 2256300 ']' 00:26:21.407 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 2256300 00:26:21.408 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:26:21.408 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:21.408 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2256300 
00:26:21.408 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:21.408 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:21.408 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2256300' 00:26:21.408 killing process with pid 2256300 00:26:21.408 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 2256300 00:26:21.408 [2024-05-15 12:27:49.936713] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:21.408 12:27:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 2256300 00:26:21.678 12:27:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:21.678 12:27:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:21.678 12:27:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:21.678 12:27:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:21.678 12:27:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:21.678 12:27:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.678 12:27:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.678 12:27:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.224 12:27:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:24.224 00:26:24.224 real 0m22.483s 00:26:24.224 user 0m25.360s 00:26:24.224 sys 0m7.155s 00:26:24.224 12:27:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:24.224 12:27:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.224 ************************************ 00:26:24.224 END TEST nvmf_discovery_remove_ifc 00:26:24.224 ************************************ 00:26:24.224 12:27:52 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:24.224 12:27:52 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:24.224 12:27:52 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:24.224 12:27:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:24.224 ************************************ 00:26:24.224 START TEST nvmf_identify_kernel_target 00:26:24.224 ************************************ 00:26:24.224 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:24.224 * Looking for test storage... 
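Editor's aside: the identify_kernel_nvmf.sh test starting here ultimately exports a local NVMe namespace through the Linux kernel target (nvmet) via configfs. The mkdir/echo/ln -s sequence traced further down condenses to roughly the following; the configfs attribute filenames are assumptions (bash xtrace does not record redirection targets), while the values (testnqn, /dev/nvme0n1, 10.0.0.1, tcp, 4420, ipv4) come straight from the log:

```bash
# Hedged sketch of the kernel NVMe-oF/TCP target setup performed by this test.
# Directory paths and echoed values mirror the xtrace; the attribute filenames
# on the right-hand side of the redirections are assumed.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

modprobe nvmet                                   # as in the trace (nvmf/common.sh@642)
mkdir "$subsys" "$subsys/namespaces/1" "$port"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"          # model string (assumed file)
echo 1            > "$subsys/attr_allow_any_host"                     # assumed file
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"                # backing block device
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1     > "$port/addr_traddr"                               # listen address
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"

# Expose the subsystem on the port, as the trailing ln -s in the trace does.
ln -s "$subsys" "$port/subsystems/"
```

After this, the `nvme discover` and `spdk_nvme_identify` runs later in the log see two discovery records (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn) at 10.0.0.1:4420.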
00:26:24.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:24.224 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:24.224 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:24.224 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.224 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.224 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.224 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.224 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.224 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.224 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.224 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.224 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:24.225 12:27:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.786 
12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:30.786 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:30.786 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
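Editor's aside: the `nvmf_tcp_init` phase traced immediately after this point splits the two e810 ports between the default namespace (initiator side, cvl_0_1) and a dedicated namespace for the target side (cvl_0_0 inside cvl_0_0_ns_spdk), then verifies reachability. Condensed from the commands in the trace below, with the script's own variable names:

```bash
# Condensed from the nvmf_tcp_init sequence in the trace that follows.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"           # target NIC moves into the netns

ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1              # initiator side stays in the root ns
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Allow NVMe/TCP traffic (port 4420) in from the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity pings in both directions, exactly as logged.
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"
```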
00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:30.786 Found net devices under 0000:af:00.0: cvl_0_0 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:30.786 Found net devices under 0000:af:00.1: cvl_0_1 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.786 12:27:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:30.786 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:30.786 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.786 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.786 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip 
addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:30.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:26:30.787 00:26:30.787 --- 10.0.0.2 ping statistics --- 00:26:30.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.787 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:30.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:26:30.787 00:26:30.787 --- 10.0.0.1 ping statistics --- 00:26:30.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.787 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:30.787 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.046 12:27:59 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:31.046 12:27:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:34.333 Waiting for block devices as requested 00:26:34.333 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:34.333 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:34.333 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:34.333 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:34.333 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:34.333 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:34.592 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:34.592 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:34.592 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:34.849 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:34.849 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:34.849 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:35.107 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:35.107 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:35.107 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:35.366 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:35.366 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:26:35.625 
12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:35.625 No valid GPT data, bailing 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:35.625 12:28:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:26:35.625 00:26:35.625 Discovery Log Number of Records 2, Generation counter 2 00:26:35.625 =====Discovery Log Entry 0====== 00:26:35.625 trtype: tcp 00:26:35.625 adrfam: ipv4 00:26:35.625 subtype: current discovery subsystem 00:26:35.625 treq: not specified, sq flow control disable supported 00:26:35.625 portid: 1 00:26:35.625 trsvcid: 4420 00:26:35.625 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:35.625 traddr: 10.0.0.1 00:26:35.625 eflags: none 00:26:35.625 sectype: none 00:26:35.625 =====Discovery Log Entry 1====== 00:26:35.625 trtype: tcp 00:26:35.625 adrfam: ipv4 00:26:35.625 subtype: nvme subsystem 00:26:35.625 treq: not 
specified, sq flow control disable supported 00:26:35.625 portid: 1 00:26:35.625 trsvcid: 4420 00:26:35.625 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:35.625 traddr: 10.0.0.1 00:26:35.625 eflags: none 00:26:35.625 sectype: none 00:26:35.625 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:35.625 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:35.625 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.887 ===================================================== 00:26:35.887 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:35.887 ===================================================== 00:26:35.887 Controller Capabilities/Features 00:26:35.887 ================================ 00:26:35.887 Vendor ID: 0000 00:26:35.887 Subsystem Vendor ID: 0000 00:26:35.887 Serial Number: 805af8f978dd174ce7fa 00:26:35.887 Model Number: Linux 00:26:35.887 Firmware Version: 6.7.0-68 00:26:35.887 Recommended Arb Burst: 0 00:26:35.887 IEEE OUI Identifier: 00 00 00 00:26:35.887 Multi-path I/O 00:26:35.887 May have multiple subsystem ports: No 00:26:35.887 May have multiple controllers: No 00:26:35.887 Associated with SR-IOV VF: No 00:26:35.887 Max Data Transfer Size: Unlimited 00:26:35.887 Max Number of Namespaces: 0 00:26:35.887 Max Number of I/O Queues: 1024 00:26:35.887 NVMe Specification Version (VS): 1.3 00:26:35.887 NVMe Specification Version (Identify): 1.3 00:26:35.887 Maximum Queue Entries: 1024 00:26:35.887 Contiguous Queues Required: No 00:26:35.887 Arbitration Mechanisms Supported 00:26:35.887 Weighted Round Robin: Not Supported 00:26:35.887 Vendor Specific: Not Supported 00:26:35.887 Reset Timeout: 7500 ms 00:26:35.887 Doorbell Stride: 4 bytes 00:26:35.887 NVM Subsystem Reset: Not Supported 00:26:35.887 Command Sets Supported 00:26:35.887 NVM Command Set: Supported 00:26:35.887 Boot Partition: Not Supported 00:26:35.887 Memory Page Size Minimum: 4096 bytes 00:26:35.887 Memory Page Size Maximum: 4096 bytes 00:26:35.887 Persistent Memory Region: Not Supported 00:26:35.887 Optional Asynchronous Events Supported 00:26:35.887 Namespace Attribute Notices: Not Supported 00:26:35.887 Firmware Activation Notices: Not Supported 00:26:35.887 ANA Change Notices: Not Supported 00:26:35.887 PLE Aggregate Log Change Notices: Not Supported 00:26:35.887 LBA Status Info Alert Notices: Not Supported 00:26:35.887 EGE Aggregate Log Change Notices: Not Supported 00:26:35.887 Normal NVM Subsystem Shutdown event: Not Supported 00:26:35.887 Zone Descriptor Change Notices: Not Supported 00:26:35.887 Discovery Log Change Notices: Supported 00:26:35.887 Controller Attributes 00:26:35.887 128-bit Host Identifier: Not Supported 00:26:35.887 Non-Operational Permissive Mode: Not Supported 00:26:35.887 NVM Sets: Not Supported 00:26:35.887 Read Recovery Levels: Not Supported 00:26:35.887 Endurance Groups: Not Supported 00:26:35.887 Predictable Latency Mode: Not Supported 00:26:35.887 Traffic Based Keep ALive: Not Supported 00:26:35.887 Namespace Granularity: Not Supported 00:26:35.887 SQ Associations: Not Supported 00:26:35.887 UUID List: Not Supported 00:26:35.887 Multi-Domain Subsystem: Not Supported 00:26:35.887 Fixed Capacity Management: Not Supported 00:26:35.887 Variable Capacity Management: Not Supported 00:26:35.887 Delete Endurance Group: Not Supported 00:26:35.887 Delete NVM Set: Not Supported 00:26:35.887 
Extended LBA Formats Supported: Not Supported 00:26:35.887 Flexible Data Placement Supported: Not Supported 00:26:35.887 00:26:35.887 Controller Memory Buffer Support 00:26:35.887 ================================ 00:26:35.887 Supported: No 00:26:35.887 00:26:35.887 Persistent Memory Region Support 00:26:35.887 ================================ 00:26:35.887 Supported: No 00:26:35.887 00:26:35.887 Admin Command Set Attributes 00:26:35.887 ============================ 00:26:35.887 Security Send/Receive: Not Supported 00:26:35.887 Format NVM: Not Supported 00:26:35.887 Firmware Activate/Download: Not Supported 00:26:35.887 Namespace Management: Not Supported 00:26:35.887 Device Self-Test: Not Supported 00:26:35.887 Directives: Not Supported 00:26:35.887 NVMe-MI: Not Supported 00:26:35.887 Virtualization Management: Not Supported 00:26:35.887 Doorbell Buffer Config: Not Supported 00:26:35.887 Get LBA Status Capability: Not Supported 00:26:35.887 Command & Feature Lockdown Capability: Not Supported 00:26:35.887 Abort Command Limit: 1 00:26:35.887 Async Event Request Limit: 1 00:26:35.887 Number of Firmware Slots: N/A 00:26:35.887 Firmware Slot 1 Read-Only: N/A 00:26:35.887 Firmware Activation Without Reset: N/A 00:26:35.887 Multiple Update Detection Support: N/A 00:26:35.887 Firmware Update Granularity: No Information Provided 00:26:35.887 Per-Namespace SMART Log: No 00:26:35.887 Asymmetric Namespace Access Log Page: Not Supported 00:26:35.887 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:35.887 Command Effects Log Page: Not Supported 00:26:35.887 Get Log Page Extended Data: Supported 00:26:35.887 Telemetry Log Pages: Not Supported 00:26:35.887 Persistent Event Log Pages: Not Supported 00:26:35.887 Supported Log Pages Log Page: May Support 00:26:35.887 Commands Supported & Effects Log Page: Not Supported 00:26:35.887 Feature Identifiers & Effects Log Page:May Support 00:26:35.887 NVMe-MI Commands & Effects Log Page: May Support 00:26:35.887 Data Area 4 for Telemetry Log: Not Supported 00:26:35.887 Error Log Page Entries Supported: 1 00:26:35.887 Keep Alive: Not Supported 00:26:35.887 00:26:35.887 NVM Command Set Attributes 00:26:35.887 ========================== 00:26:35.887 Submission Queue Entry Size 00:26:35.887 Max: 1 00:26:35.887 Min: 1 00:26:35.887 Completion Queue Entry Size 00:26:35.887 Max: 1 00:26:35.887 Min: 1 00:26:35.887 Number of Namespaces: 0 00:26:35.887 Compare Command: Not Supported 00:26:35.887 Write Uncorrectable Command: Not Supported 00:26:35.887 Dataset Management Command: Not Supported 00:26:35.887 Write Zeroes Command: Not Supported 00:26:35.887 Set Features Save Field: Not Supported 00:26:35.887 Reservations: Not Supported 00:26:35.887 Timestamp: Not Supported 00:26:35.887 Copy: Not Supported 00:26:35.887 Volatile Write Cache: Not Present 00:26:35.887 Atomic Write Unit (Normal): 1 00:26:35.887 Atomic Write Unit (PFail): 1 00:26:35.887 Atomic Compare & Write Unit: 1 00:26:35.887 Fused Compare & Write: Not Supported 00:26:35.887 Scatter-Gather List 00:26:35.887 SGL Command Set: Supported 00:26:35.887 SGL Keyed: Not Supported 00:26:35.887 SGL Bit Bucket Descriptor: Not Supported 00:26:35.887 SGL Metadata Pointer: Not Supported 00:26:35.887 Oversized SGL: Not Supported 00:26:35.887 SGL Metadata Address: Not Supported 00:26:35.887 SGL Offset: Supported 00:26:35.887 Transport SGL Data Block: Not Supported 00:26:35.887 Replay Protected Memory Block: Not Supported 00:26:35.887 00:26:35.887 Firmware Slot Information 00:26:35.887 ========================= 00:26:35.887 
Active slot: 0 00:26:35.887 00:26:35.887 00:26:35.887 Error Log 00:26:35.887 ========= 00:26:35.887 00:26:35.887 Active Namespaces 00:26:35.887 ================= 00:26:35.887 Discovery Log Page 00:26:35.887 ================== 00:26:35.887 Generation Counter: 2 00:26:35.887 Number of Records: 2 00:26:35.887 Record Format: 0 00:26:35.887 00:26:35.887 Discovery Log Entry 0 00:26:35.887 ---------------------- 00:26:35.887 Transport Type: 3 (TCP) 00:26:35.887 Address Family: 1 (IPv4) 00:26:35.887 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:35.887 Entry Flags: 00:26:35.887 Duplicate Returned Information: 0 00:26:35.887 Explicit Persistent Connection Support for Discovery: 0 00:26:35.887 Transport Requirements: 00:26:35.887 Secure Channel: Not Specified 00:26:35.887 Port ID: 1 (0x0001) 00:26:35.887 Controller ID: 65535 (0xffff) 00:26:35.887 Admin Max SQ Size: 32 00:26:35.887 Transport Service Identifier: 4420 00:26:35.887 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:35.887 Transport Address: 10.0.0.1 00:26:35.887 Discovery Log Entry 1 00:26:35.887 ---------------------- 00:26:35.887 Transport Type: 3 (TCP) 00:26:35.887 Address Family: 1 (IPv4) 00:26:35.887 Subsystem Type: 2 (NVM Subsystem) 00:26:35.887 Entry Flags: 00:26:35.887 Duplicate Returned Information: 0 00:26:35.887 Explicit Persistent Connection Support for Discovery: 0 00:26:35.887 Transport Requirements: 00:26:35.887 Secure Channel: Not Specified 00:26:35.887 Port ID: 1 (0x0001) 00:26:35.887 Controller ID: 65535 (0xffff) 00:26:35.887 Admin Max SQ Size: 32 00:26:35.887 Transport Service Identifier: 4420 00:26:35.887 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:35.887 Transport Address: 10.0.0.1 00:26:35.888 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:35.888 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.888 get_feature(0x01) failed 00:26:35.888 get_feature(0x02) failed 00:26:35.888 get_feature(0x04) failed 00:26:35.888 ===================================================== 00:26:35.888 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:35.888 ===================================================== 00:26:35.888 Controller Capabilities/Features 00:26:35.888 ================================ 00:26:35.888 Vendor ID: 0000 00:26:35.888 Subsystem Vendor ID: 0000 00:26:35.888 Serial Number: 2c4721e532f6dc079019 00:26:35.888 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:35.888 Firmware Version: 6.7.0-68 00:26:35.888 Recommended Arb Burst: 6 00:26:35.888 IEEE OUI Identifier: 00 00 00 00:26:35.888 Multi-path I/O 00:26:35.888 May have multiple subsystem ports: Yes 00:26:35.888 May have multiple controllers: Yes 00:26:35.888 Associated with SR-IOV VF: No 00:26:35.888 Max Data Transfer Size: Unlimited 00:26:35.888 Max Number of Namespaces: 1024 00:26:35.888 Max Number of I/O Queues: 128 00:26:35.888 NVMe Specification Version (VS): 1.3 00:26:35.888 NVMe Specification Version (Identify): 1.3 00:26:35.888 Maximum Queue Entries: 1024 00:26:35.888 Contiguous Queues Required: No 00:26:35.888 Arbitration Mechanisms Supported 00:26:35.888 Weighted Round Robin: Not Supported 00:26:35.888 Vendor Specific: Not Supported 00:26:35.888 Reset Timeout: 7500 ms 00:26:35.888 Doorbell Stride: 4 bytes 00:26:35.888 NVM Subsystem Reset: Not Supported 
00:26:35.888 Command Sets Supported 00:26:35.888 NVM Command Set: Supported 00:26:35.888 Boot Partition: Not Supported 00:26:35.888 Memory Page Size Minimum: 4096 bytes 00:26:35.888 Memory Page Size Maximum: 4096 bytes 00:26:35.888 Persistent Memory Region: Not Supported 00:26:35.888 Optional Asynchronous Events Supported 00:26:35.888 Namespace Attribute Notices: Supported 00:26:35.888 Firmware Activation Notices: Not Supported 00:26:35.888 ANA Change Notices: Supported 00:26:35.888 PLE Aggregate Log Change Notices: Not Supported 00:26:35.888 LBA Status Info Alert Notices: Not Supported 00:26:35.888 EGE Aggregate Log Change Notices: Not Supported 00:26:35.888 Normal NVM Subsystem Shutdown event: Not Supported 00:26:35.888 Zone Descriptor Change Notices: Not Supported 00:26:35.888 Discovery Log Change Notices: Not Supported 00:26:35.888 Controller Attributes 00:26:35.888 128-bit Host Identifier: Supported 00:26:35.888 Non-Operational Permissive Mode: Not Supported 00:26:35.888 NVM Sets: Not Supported 00:26:35.888 Read Recovery Levels: Not Supported 00:26:35.888 Endurance Groups: Not Supported 00:26:35.888 Predictable Latency Mode: Not Supported 00:26:35.888 Traffic Based Keep ALive: Supported 00:26:35.888 Namespace Granularity: Not Supported 00:26:35.888 SQ Associations: Not Supported 00:26:35.888 UUID List: Not Supported 00:26:35.888 Multi-Domain Subsystem: Not Supported 00:26:35.888 Fixed Capacity Management: Not Supported 00:26:35.888 Variable Capacity Management: Not Supported 00:26:35.888 Delete Endurance Group: Not Supported 00:26:35.888 Delete NVM Set: Not Supported 00:26:35.888 Extended LBA Formats Supported: Not Supported 00:26:35.888 Flexible Data Placement Supported: Not Supported 00:26:35.888 00:26:35.888 Controller Memory Buffer Support 00:26:35.888 ================================ 00:26:35.888 Supported: No 00:26:35.888 00:26:35.888 Persistent Memory Region Support 00:26:35.888 ================================ 00:26:35.888 Supported: No 00:26:35.888 00:26:35.888 Admin Command Set Attributes 00:26:35.888 ============================ 00:26:35.888 Security Send/Receive: Not Supported 00:26:35.888 Format NVM: Not Supported 00:26:35.888 Firmware Activate/Download: Not Supported 00:26:35.888 Namespace Management: Not Supported 00:26:35.888 Device Self-Test: Not Supported 00:26:35.888 Directives: Not Supported 00:26:35.888 NVMe-MI: Not Supported 00:26:35.888 Virtualization Management: Not Supported 00:26:35.888 Doorbell Buffer Config: Not Supported 00:26:35.888 Get LBA Status Capability: Not Supported 00:26:35.888 Command & Feature Lockdown Capability: Not Supported 00:26:35.888 Abort Command Limit: 4 00:26:35.888 Async Event Request Limit: 4 00:26:35.888 Number of Firmware Slots: N/A 00:26:35.888 Firmware Slot 1 Read-Only: N/A 00:26:35.888 Firmware Activation Without Reset: N/A 00:26:35.888 Multiple Update Detection Support: N/A 00:26:35.888 Firmware Update Granularity: No Information Provided 00:26:35.888 Per-Namespace SMART Log: Yes 00:26:35.888 Asymmetric Namespace Access Log Page: Supported 00:26:35.888 ANA Transition Time : 10 sec 00:26:35.888 00:26:35.888 Asymmetric Namespace Access Capabilities 00:26:35.888 ANA Optimized State : Supported 00:26:35.888 ANA Non-Optimized State : Supported 00:26:35.888 ANA Inaccessible State : Supported 00:26:35.888 ANA Persistent Loss State : Supported 00:26:35.888 ANA Change State : Supported 00:26:35.888 ANAGRPID is not changed : No 00:26:35.888 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:35.888 00:26:35.888 ANA Group Identifier 
Maximum : 128 00:26:35.888 Number of ANA Group Identifiers : 128 00:26:35.888 Max Number of Allowed Namespaces : 1024 00:26:35.888 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:35.888 Command Effects Log Page: Supported 00:26:35.888 Get Log Page Extended Data: Supported 00:26:35.888 Telemetry Log Pages: Not Supported 00:26:35.888 Persistent Event Log Pages: Not Supported 00:26:35.888 Supported Log Pages Log Page: May Support 00:26:35.888 Commands Supported & Effects Log Page: Not Supported 00:26:35.888 Feature Identifiers & Effects Log Page:May Support 00:26:35.888 NVMe-MI Commands & Effects Log Page: May Support 00:26:35.888 Data Area 4 for Telemetry Log: Not Supported 00:26:35.888 Error Log Page Entries Supported: 128 00:26:35.888 Keep Alive: Supported 00:26:35.888 Keep Alive Granularity: 1000 ms 00:26:35.888 00:26:35.888 NVM Command Set Attributes 00:26:35.888 ========================== 00:26:35.888 Submission Queue Entry Size 00:26:35.888 Max: 64 00:26:35.888 Min: 64 00:26:35.888 Completion Queue Entry Size 00:26:35.888 Max: 16 00:26:35.888 Min: 16 00:26:35.888 Number of Namespaces: 1024 00:26:35.888 Compare Command: Not Supported 00:26:35.888 Write Uncorrectable Command: Not Supported 00:26:35.888 Dataset Management Command: Supported 00:26:35.888 Write Zeroes Command: Supported 00:26:35.888 Set Features Save Field: Not Supported 00:26:35.888 Reservations: Not Supported 00:26:35.888 Timestamp: Not Supported 00:26:35.888 Copy: Not Supported 00:26:35.888 Volatile Write Cache: Present 00:26:35.888 Atomic Write Unit (Normal): 1 00:26:35.888 Atomic Write Unit (PFail): 1 00:26:35.888 Atomic Compare & Write Unit: 1 00:26:35.888 Fused Compare & Write: Not Supported 00:26:35.888 Scatter-Gather List 00:26:35.888 SGL Command Set: Supported 00:26:35.888 SGL Keyed: Not Supported 00:26:35.888 SGL Bit Bucket Descriptor: Not Supported 00:26:35.888 SGL Metadata Pointer: Not Supported 00:26:35.888 Oversized SGL: Not Supported 00:26:35.888 SGL Metadata Address: Not Supported 00:26:35.888 SGL Offset: Supported 00:26:35.888 Transport SGL Data Block: Not Supported 00:26:35.888 Replay Protected Memory Block: Not Supported 00:26:35.888 00:26:35.888 Firmware Slot Information 00:26:35.888 ========================= 00:26:35.888 Active slot: 0 00:26:35.888 00:26:35.888 Asymmetric Namespace Access 00:26:35.888 =========================== 00:26:35.888 Change Count : 0 00:26:35.888 Number of ANA Group Descriptors : 1 00:26:35.888 ANA Group Descriptor : 0 00:26:35.888 ANA Group ID : 1 00:26:35.888 Number of NSID Values : 1 00:26:35.888 Change Count : 0 00:26:35.888 ANA State : 1 00:26:35.888 Namespace Identifier : 1 00:26:35.888 00:26:35.888 Commands Supported and Effects 00:26:35.888 ============================== 00:26:35.888 Admin Commands 00:26:35.888 -------------- 00:26:35.888 Get Log Page (02h): Supported 00:26:35.888 Identify (06h): Supported 00:26:35.888 Abort (08h): Supported 00:26:35.888 Set Features (09h): Supported 00:26:35.888 Get Features (0Ah): Supported 00:26:35.888 Asynchronous Event Request (0Ch): Supported 00:26:35.888 Keep Alive (18h): Supported 00:26:35.888 I/O Commands 00:26:35.888 ------------ 00:26:35.888 Flush (00h): Supported 00:26:35.888 Write (01h): Supported LBA-Change 00:26:35.888 Read (02h): Supported 00:26:35.888 Write Zeroes (08h): Supported LBA-Change 00:26:35.888 Dataset Management (09h): Supported 00:26:35.888 00:26:35.888 Error Log 00:26:35.888 ========= 00:26:35.888 Entry: 0 00:26:35.888 Error Count: 0x3 00:26:35.888 Submission Queue Id: 0x0 00:26:35.888 Command Id: 0x5 
00:26:35.888 Phase Bit: 0 00:26:35.888 Status Code: 0x2 00:26:35.888 Status Code Type: 0x0 00:26:35.888 Do Not Retry: 1 00:26:35.888 Error Location: 0x28 00:26:35.888 LBA: 0x0 00:26:35.888 Namespace: 0x0 00:26:35.889 Vendor Log Page: 0x0 00:26:35.889 ----------- 00:26:35.889 Entry: 1 00:26:35.889 Error Count: 0x2 00:26:35.889 Submission Queue Id: 0x0 00:26:35.889 Command Id: 0x5 00:26:35.889 Phase Bit: 0 00:26:35.889 Status Code: 0x2 00:26:35.889 Status Code Type: 0x0 00:26:35.889 Do Not Retry: 1 00:26:35.889 Error Location: 0x28 00:26:35.889 LBA: 0x0 00:26:35.889 Namespace: 0x0 00:26:35.889 Vendor Log Page: 0x0 00:26:35.889 ----------- 00:26:35.889 Entry: 2 00:26:35.889 Error Count: 0x1 00:26:35.889 Submission Queue Id: 0x0 00:26:35.889 Command Id: 0x4 00:26:35.889 Phase Bit: 0 00:26:35.889 Status Code: 0x2 00:26:35.889 Status Code Type: 0x0 00:26:35.889 Do Not Retry: 1 00:26:35.889 Error Location: 0x28 00:26:35.889 LBA: 0x0 00:26:35.889 Namespace: 0x0 00:26:35.889 Vendor Log Page: 0x0 00:26:35.889 00:26:35.889 Number of Queues 00:26:35.889 ================ 00:26:35.889 Number of I/O Submission Queues: 128 00:26:35.889 Number of I/O Completion Queues: 128 00:26:35.889 00:26:35.889 ZNS Specific Controller Data 00:26:35.889 ============================ 00:26:35.889 Zone Append Size Limit: 0 00:26:35.889 00:26:35.889 00:26:35.889 Active Namespaces 00:26:35.889 ================= 00:26:35.889 get_feature(0x05) failed 00:26:35.889 Namespace ID:1 00:26:35.889 Command Set Identifier: NVM (00h) 00:26:35.889 Deallocate: Supported 00:26:35.889 Deallocated/Unwritten Error: Not Supported 00:26:35.889 Deallocated Read Value: Unknown 00:26:35.889 Deallocate in Write Zeroes: Not Supported 00:26:35.889 Deallocated Guard Field: 0xFFFF 00:26:35.889 Flush: Supported 00:26:35.889 Reservation: Not Supported 00:26:35.889 Namespace Sharing Capabilities: Multiple Controllers 00:26:35.889 Size (in LBAs): 3125627568 (1490GiB) 00:26:35.889 Capacity (in LBAs): 3125627568 (1490GiB) 00:26:35.889 Utilization (in LBAs): 3125627568 (1490GiB) 00:26:35.889 UUID: c0a93212-b0ff-4070-aba2-8763475b76f9 00:26:35.889 Thin Provisioning: Not Supported 00:26:35.889 Per-NS Atomic Units: Yes 00:26:35.889 Atomic Boundary Size (Normal): 0 00:26:35.889 Atomic Boundary Size (PFail): 0 00:26:35.889 Atomic Boundary Offset: 0 00:26:35.889 NGUID/EUI64 Never Reused: No 00:26:35.889 ANA group ID: 1 00:26:35.889 Namespace Write Protected: No 00:26:35.889 Number of LBA Formats: 1 00:26:35.889 Current LBA Format: LBA Format #00 00:26:35.889 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:35.889 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:35.889 rmmod nvme_tcp 00:26:35.889 rmmod nvme_fabrics 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 
-- # set -e 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.889 12:28:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.425 12:28:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:38.425 12:28:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:38.426 12:28:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:38.426 12:28:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:38.426 12:28:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:38.426 12:28:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:38.426 12:28:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:38.426 12:28:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:38.426 12:28:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:38.426 12:28:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:38.426 12:28:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:40.973 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:40.973 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:40.973 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:40.973 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:40.973 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:40.973 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:40.973 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:40.973 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:40.973 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:41.231 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:41.231 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:41.231 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:41.231 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:41.231 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:41.231 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:41.231 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:26:43.134 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:26:43.134 00:26:43.134 real 0m18.949s 00:26:43.134 user 0m4.428s 00:26:43.134 sys 0m10.059s 00:26:43.134 12:28:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:43.134 12:28:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:43.134 ************************************ 00:26:43.134 END TEST nvmf_identify_kernel_target 00:26:43.134 ************************************ 00:26:43.134 12:28:11 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:43.134 12:28:11 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:43.134 12:28:11 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:43.134 12:28:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:43.134 ************************************ 00:26:43.134 START TEST nvmf_auth_host 00:26:43.134 ************************************ 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:43.134 * Looking for test storage... 00:26:43.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.134 12:28:11 
nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:43.134 12:28:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:43.134 12:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:43.135 12:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:49.733 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:49.733 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.733 
12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:49.733 Found net devices under 0000:af:00.0: cvl_0_0 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:49.733 Found net devices under 0000:af:00.1: cvl_0_1 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:49.733 12:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.733 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.733 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.733 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:49.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:26:49.733 00:26:49.733 --- 10.0.0.2 ping statistics --- 00:26:49.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.733 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:26:49.733 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:49.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:26:49.733 00:26:49.734 --- 10.0.0.1 ping statistics --- 00:26:49.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.734 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2269096 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2269096 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' 
-z 2269096 ']' 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:49.734 12:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3990b0f7d613e7ae8b6fcf42569eab8e 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Iqx 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3990b0f7d613e7ae8b6fcf42569eab8e 0 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3990b0f7d613e7ae8b6fcf42569eab8e 0 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3990b0f7d613e7ae8b6fcf42569eab8e 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Iqx 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Iqx 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Iqx 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:50.671 
12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=22e7adf8965b01e3fef222061393775d4cf9a684b8e04a2babafcdbd0ec45ff7 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tfj 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 22e7adf8965b01e3fef222061393775d4cf9a684b8e04a2babafcdbd0ec45ff7 3 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 22e7adf8965b01e3fef222061393775d4cf9a684b8e04a2babafcdbd0ec45ff7 3 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=22e7adf8965b01e3fef222061393775d4cf9a684b8e04a2babafcdbd0ec45ff7 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tfj 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tfj 00:26:50.671 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.tfj 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0a66b0468d2d0953889a8644d3f249174d028dbcc456142a 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.P0k 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0a66b0468d2d0953889a8644d3f249174d028dbcc456142a 0 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0a66b0468d2d0953889a8644d3f249174d028dbcc456142a 0 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 
00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0a66b0468d2d0953889a8644d3f249174d028dbcc456142a 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.P0k 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.P0k 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.P0k 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=174ee2990ec28c401de0ebcda7b4a1ff7487943041a049b1 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Niz 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 174ee2990ec28c401de0ebcda7b4a1ff7487943041a049b1 2 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 174ee2990ec28c401de0ebcda7b4a1ff7487943041a049b1 2 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=174ee2990ec28c401de0ebcda7b4a1ff7487943041a049b1 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Niz 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Niz 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Niz 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=546f70e64b3444699d3493a7d5a068f7 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.iiV 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 546f70e64b3444699d3493a7d5a068f7 1 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 546f70e64b3444699d3493a7d5a068f7 1 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=546f70e64b3444699d3493a7d5a068f7 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.iiV 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.iiV 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.iiV 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d41d49e14aeb283c6cdac551ad16e157 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kP3 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d41d49e14aeb283c6cdac551ad16e157 1 00:26:50.930 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d41d49e14aeb283c6cdac551ad16e157 1 00:26:50.931 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:50.931 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:50.931 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d41d49e14aeb283c6cdac551ad16e157 00:26:50.931 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:50.931 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:50.931 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kP3 00:26:50.931 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kP3 00:26:50.931 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.kP3 00:26:50.931 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 
00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=476f12b1e72d3d927522ed2d2a781326054defa43bc44908 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4tF 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 476f12b1e72d3d927522ed2d2a781326054defa43bc44908 2 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 476f12b1e72d3d927522ed2d2a781326054defa43bc44908 2 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=476f12b1e72d3d927522ed2d2a781326054defa43bc44908 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4tF 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4tF 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4tF 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=addf579634daec285772f2a12fed716f 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ENl 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key addf579634daec285772f2a12fed716f 0 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 addf579634daec285772f2a12fed716f 0 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=addf579634daec285772f2a12fed716f 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # 
chmod 0600 /tmp/spdk.key-null.ENl 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ENl 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ENl 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fa1907abd9331e2f0f8dbf9c27e8c620d1eaeab7914be25ef048a761a35adb39 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.v8i 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fa1907abd9331e2f0f8dbf9c27e8c620d1eaeab7914be25ef048a761a35adb39 3 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fa1907abd9331e2f0f8dbf9c27e8c620d1eaeab7914be25ef048a761a35adb39 3 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fa1907abd9331e2f0f8dbf9c27e8c620d1eaeab7914be25ef048a761a35adb39 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.v8i 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.v8i 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.v8i 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2269096 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 2269096 ']' 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.189 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:51.190 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
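The five gen_dhchap_key calls above produce the /tmp/spdk.key-* files that are registered with the target's keyring next. A hedged sketch of the key layout they appear to emit, assuming the usual DHHC-1 convention of base64 over the ASCII secret plus a little-endian CRC32 trailer; the exact encoding lives in the python one-liner the trace does not show, so treat the details below as an approximation rather than the harness's own code:

# Sketch of a "gen_dhchap_key null 32"-style key file (assumed DHHC-1 layout).
secret=$(xxd -p -c0 -l 16 /dev/urandom)     # 32 hex characters, as in the trace above
b64=$(python3 - "$secret" <<'PY'
import base64, sys, zlib
s = sys.argv[1].encode()
crc = zlib.crc32(s).to_bytes(4, "little")   # assumption: CRC32 appended little-endian
print(base64.b64encode(s + crc).decode())
PY
)
keyfile=$(mktemp -t spdk.key-null.XXX)      # same mktemp pattern as the trace
printf 'DHHC-1:00:%s:\n' "$b64" > "$keyfile"   # "00" = null digest, 01/02/03 = sha256/384/512
chmod 0600 "$keyfile"
echo "$keyfile"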
00:26:51.190 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:51.190 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Iqx 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.tfj ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tfj 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.P0k 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Niz ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Niz 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.iiV 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.kP3 ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kP3 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4tF 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ENl ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ENl 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.v8i 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
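Each generated key/ctrl-key pair is handed to the SPDK target's keyring by name (key0/ckey0 through key4) before the kernel-side target is configured below; those names are what the later --dhchap-key/--dhchap-ctrlr-key options refer to. Outside the harness the same registration could be done directly with scripts/rpc.py against the nvmf_tgt started earlier; a hedged sketch using the key file names from this run:

# Hedged sketch: register the DHHC-1 key files with the running nvmf_tgt,
# mirroring the rpc_cmd keyring_file_add_key calls traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.Iqx
"$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tfj
"$rpc" keyring_file_add_key key1  /tmp/spdk.key-null.P0k
"$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Niz
"$rpc" keyring_file_add_key key2  /tmp/spdk.key-sha256.iiV
"$rpc" keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kP3
"$rpc" keyring_file_add_key key3  /tmp/spdk.key-sha384.4tF
"$rpc" keyring_file_add_key ckey3 /tmp/spdk.key-null.ENl
"$rpc" keyring_file_add_key key4  /tmp/spdk.key-sha512.v8i   # ckeys[4] is empty, so no ctrlr key here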
00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:51.447 12:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:54.724 Waiting for block devices as requested 00:26:54.724 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:54.724 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:54.724 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:54.724 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:54.724 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:54.724 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:54.724 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:54.982 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:54.982 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:54.982 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:54.982 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:55.239 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:55.239 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:55.239 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:55.497 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:55.497 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:55.497 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:56.432 No valid GPT data, bailing 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:56.432 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:56.433 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:26:56.433 00:26:56.433 Discovery Log Number of Records 2, Generation counter 2 00:26:56.433 =====Discovery Log Entry 0====== 00:26:56.433 trtype: tcp 00:26:56.433 adrfam: ipv4 00:26:56.433 subtype: current discovery subsystem 00:26:56.433 treq: not specified, sq flow control disable supported 00:26:56.433 portid: 1 00:26:56.433 trsvcid: 4420 00:26:56.433 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:56.433 traddr: 10.0.0.1 00:26:56.433 eflags: none 00:26:56.433 sectype: none 00:26:56.433 =====Discovery Log Entry 1====== 00:26:56.433 trtype: tcp 00:26:56.433 adrfam: ipv4 00:26:56.433 subtype: nvme subsystem 00:26:56.433 treq: not specified, sq flow control disable supported 00:26:56.433 portid: 1 00:26:56.433 trsvcid: 4420 00:26:56.433 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:56.433 traddr: 10.0.0.1 00:26:56.433 eflags: none 00:26:56.433 sectype: none 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:26:56.691 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 
]] 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.692 12:28:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.692 nvme0n1 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.692 12:28:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.692 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.950 
12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.950 nvme0n1 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.950 12:28:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:26:56.950 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.951 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.209 nvme0n1 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
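The entries above show one pass of connect_authenticate (here sha256 / ffdhe2048 / key index 1) on the SPDK host side: bdev_nvme_set_options pins the allowed DH-HMAC-CHAP digests and DH groups, bdev_nvme_attach_controller connects with --dhchap-key and, because a controller key exists for this index, --dhchap-ctrlr-key, and bdev_nvme_get_controllers / bdev_nvme_detach_controller verify and tear the connection down again. A minimal sketch of the same sequence driven directly through rpc.py, assuming the target configured earlier in the log is still listening on 10.0.0.1:4420 and that keys named key1/ckey1 were registered beforehand (that setup is outside this excerpt):

  rpc=./scripts/rpc.py   # usual SPDK location; adjust to your checkout

  # Restrict the digests and DH groups the initiator may negotiate.
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Attach, authenticating with key1 and (bidirectionally) ckey1.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify the controller came up, then tear it down for the next combination.
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  $rpc bdev_nvme_detach_controller nvme0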
00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.209 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.467 nvme0n1 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:26:57.468 12:28:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.468 12:28:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.726 nvme0n1 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.727 nvme0n1 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.727 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.985 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.986 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.986 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.986 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.986 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.986 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.986 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.986 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.986 nvme0n1 00:26:57.986 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:57.986 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.986 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.986 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.986 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.244 nvme0n1 00:26:58.244 
12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.244 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.503 nvme0n1 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.503 12:28:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.503 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.503 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.503 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.503 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.503 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
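This stretch of the log is the middle of the main sweep in host/auth.sh (markers @100-@104): every digest is paired with every DH group and every key index, the kernel target is re-keyed, and the host reconnects with matching parameters. A condensed sketch of that driver loop, following only what the xtrace output shows (the keys/ckeys arrays hold the DHHC-1 secrets generated earlier in the test; the helper bodies appear in the surrounding trace entries):

  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # Re-key the target side: 'hmac(<digest>)', the DH group name,
              # key $keyid and, when defined, controller key $keyid.
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              # Reconnect from the SPDK host with the same parameters and
              # verify that authentication succeeds (see the rpc.py sketch above).
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done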
00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.762 nvme0n1 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:58.762 
12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.762 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.021 12:28:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.021 nvme0n1 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.021 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.022 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:59.280 12:28:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.280 nvme0n1 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.280 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.539 12:28:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.539 12:28:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.797 nvme0n1 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.797 12:28:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.797 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.798 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.798 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.798 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.798 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.798 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.798 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:59.798 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.056 nvme0n1 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
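Editor's note on the nvmet_auth_set_key calls traced in this run (the echo 'hmac(sha256)', the FFDHE group name, and the DHHC-1 key/controller-key blobs): the sketch below shows one plausible way such a helper provisions DH-HMAC-CHAP secrets for a host on a Linux kernel nvmet target through configfs. The configfs root, the host NQN used as a directory name, and the attribute names are assumptions made for illustration, not values taken from this log.

    # Hypothetical sketch: push per-host DH-CHAP settings into kernel nvmet configfs.
    # The host directory path and attribute names are assumptions for illustration.
    nvmet_auth_set_key_sketch() {
        local digest=$1 dhgroup=$2 key=$3 ckey=$4
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "${host_dir}/dhchap_hash"       # e.g. hmac(sha256)
        echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"    # e.g. ffdhe4096
        echo "${key}"          > "${host_dir}/dhchap_key"        # DHHC-1:xx:... host secret
        # Controller secret is optional; only written when a ckey was generated.
        [[ -n "${ckey}" ]] && echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
    }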
00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.056 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.314 nvme0n1 00:27:00.314 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.314 12:28:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.315 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.315 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.315 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.315 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.315 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.315 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.315 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.315 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.573 12:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.573 nvme0n1 00:27:00.573 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.573 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.573 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.573 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.573 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:00.831 12:28:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.831 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.090 nvme0n1 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.090 
12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.090 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.348 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.348 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.348 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.348 12:28:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.348 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.348 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.348 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.348 12:28:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.348 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:01.348 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.348 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.606 nvme0n1 00:27:01.606 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.606 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.606 12:28:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.606 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.606 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.606 12:28:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.606 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.173 nvme0n1 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.173 
12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:02.173 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.174 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.432 nvme0n1 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.432 12:28:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.000 nvme0n1 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:03.000 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.569 nvme0n1 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.569 12:28:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:03.569 12:28:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:03.569 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.200 nvme0n1 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.200 12:28:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.201 12:28:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:04.201 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.201 12:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.768 nvme0n1 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.768 
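Editor's note: the rpc_cmd traces above reduce to four SPDK RPCs issued from the host side per iteration: restrict the allowed DH-CHAP digests and dhgroups, attach the controller with the host key and (optionally) the controller key, confirm a controller named nvme0 exists, then detach it. A minimal standalone replay using the same RPC names and flags as the log, assuming the standard scripts/rpc.py wrapper is on PATH and that key2/ckey2 are key names registered with the SPDK keyring earlier in the test:

    # Hypothetical replay of the traced host-side RPC sequence (rpc.py assumed reachable).
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Assert the authenticated controller actually came up, then tear it down.
    [[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    rpc.py bdev_nvme_detach_controller nvme0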
12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
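Editor's note: the get_main_ns_ip trace repeated before every attach picks the address to dial from the transport type: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, with the chain of [[ -z ... ]] checks falling through until it can echo 10.0.0.1 here. A compact sketch of that selection logic; the TEST_TRANSPORT variable name and the exported IP variables are assumed to come from the test environment:

    # Sketch of the transport-to-IP lookup seen in the trace (variable names are assumptions).
    get_main_ns_ip_sketch() {
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        local transport=${TEST_TRANSPORT:-tcp}
        local var=${ip_candidates[$transport]}    # e.g. NVMF_INITIATOR_IP for tcp
        local ip=${!var}                          # indirect expansion, e.g. 10.0.0.1
        [[ -n "$ip" ]] && echo "$ip"
    }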
00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.768 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.335 nvme0n1 00:27:05.335 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.335 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.335 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.335 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.335 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.335 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.335 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.335 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.335 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.335 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:05.593 
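Editor's note: key index 4 is the one entry without a paired controller key; ckey expands to the empty string, the [[ -z '' ]] branch is taken, and the subsequent attach is issued with only --dhchap-key key4. The traced ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line is the mechanism. A small self-contained illustration of that ${var:+...} expansion; the array contents are made-up placeholders:

    # Illustration of the optional --dhchap-ctrlr-key argument, mirroring the traced ckey=( ... ) line.
    ckeys=([1]="DHHC-1:02:placeholder==" [4]="")   # hypothetical: key 4 has no controller key
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=${keyid}: extra args: ${ckey[*]:-<none>}"
    done
    # keyid=1 adds '--dhchap-ctrlr-key ckey1'; keyid=4 adds nothing,
    # so only the host authenticates itself and no bidirectional auth is requested.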
12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.593 12:28:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.159 nvme0n1 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.159 nvme0n1 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.159 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.418 nvme0n1 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.418 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.419 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.419 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:06.419 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.419 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.677 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.677 12:28:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:06.677 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:06.677 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:06.677 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.677 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.677 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:06.677 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:27:06.677 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.678 12:28:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.678 nvme0n1 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.678 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.937 nvme0n1 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.937 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.196 nvme0n1 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.196 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.455 nvme0n1 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
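connect_authenticate (host/auth.sh@55-65), entered here for sha384/ffdhe3072/keyid=1, is the host-side half of each pass: it restricts the SPDK bdev_nvme layer to the digest and DH group under test, attaches to the kernel target at 10.0.0.1:4420 with the matching keypair, checks that the controller actually appears, and detaches before the next keyid. The sketch below restates the RPCs the trace shows for this pass and is not additional captured output; rpc_cmd is the harness wrapper around SPDK's JSON-RPC client, 10.0.0.1 is NVMF_INITIATOR_IP as resolved by get_main_ns_ip, key1/ckey1 are key names set up earlier in the script (outside this excerpt), and the name check is paraphrased from auth.sh@64:

# host side of the sha384 / ffdhe3072 / keyid=1 pass (host/auth.sh@55-65)
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1   # ctrlr key only when this keyid has one
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]                      # xtrace escapes the pattern, hence "\n\v\m\e\0" in the log
rpc_cmd bdev_nvme_detach_controller nvme0 # tear down before the next keyid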
00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.455 12:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.714 nvme0n1 00:27:07.714 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.714 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.714 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.714 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.714 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.714 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.714 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.714 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.715 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.974 nvme0n1 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.974 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.232 nvme0n1 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.232 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.233 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:08.233 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.233 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.492 nvme0n1 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.492 12:28:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.492 12:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.750 nvme0n1 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:08.750 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.751 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.009 nvme0n1 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.009 12:28:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.009 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.267 nvme0n1 00:27:09.267 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.267 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.267 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.267 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.267 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:09.526 12:28:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.526 12:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.785 nvme0n1 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:27:09.785 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.044 nvme0n1 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.044 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.610 nvme0n1 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.610 12:28:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.869 nvme0n1 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.869 12:28:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.869 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.126 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.384 nvme0n1 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.384 12:28:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.948 nvme0n1 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.948 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
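The nvmf/common.sh@741-755 lines that recur throughout this trace come from get_main_ns_ip, the helper that resolves which address the host dials for the active transport. A minimal sketch of that selection logic, reconstructed from the traced statements — the candidate map, the indirection through NVMF_INITIATOR_IP, and the 10.0.0.1 result are taken from the log, while the exact function body and the $TEST_TRANSPORT input are assumptions:

    # Hedged reconstruction of get_main_ns_ip (nvmf/common.sh@741-755).
    # $TEST_TRANSPORT is an assumed input; this run uses "tcp".
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # stores the env var *name*, not its value
            ["tcp"]=NVMF_INITIATOR_IP
        )

        # Bail out if the transport is unset or has no candidate variable.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        # Indirect expansion turns the variable name into its value.
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"   # prints 10.0.0.1 in this run
    }

The value it prints is what the trace then passes as -a 10.0.0.1 to bdev_nvme_attach_controller.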
00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.949 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.206 nvme0n1 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:12.206 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:12.207 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:12.207 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.207 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:12.207 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:12.207 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:27:12.207 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
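The host/auth.sh@55-65 statements around this point belong to connect_authenticate, which configures the initiator for one digest/dhgroup/keyid combination, attaches, verifies, and detaches. A hedged sketch assembled from the traced statements — the rpc_cmd invocations, NQNs, port, and the ckey array idiom are copied from the log; the surrounding control flow is assumed to follow the trace order:

    # Hedged sketch of connect_authenticate as traced at host/auth.sh@55-65.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey

        # ${var:+word} expands to nothing when ckeys[keyid] is empty, so
        # --dhchap-ctrlr-key is only added for keys with a controller secret.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the host to the digest/dhgroup pair under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach with DH-HMAC-CHAP key "key$keyid" (tcp/ipv4 as in this run).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" "${ckey[@]}"

        # A successful handshake leaves a controller named nvme0 behind.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Because the name check and the detach run on every iteration, a rejected DH-HMAC-CHAP handshake surfaces as a missing nvme0 controller rather than a hung connect.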
00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.465 12:28:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.031 nvme0n1 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.031 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.597 nvme0n1 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:13.597 12:28:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.597 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.164 nvme0n1 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.164 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.165 12:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.165 12:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:14.165 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:14.165 12:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.730 nvme0n1 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:14.730 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.988 12:28:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:14.988 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.554 nvme0n1 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.554 12:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.554 nvme0n1 00:27:15.554 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.554 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.554 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.554 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.554 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.554 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.554 12:28:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.554 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.554 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.554 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.813 nvme0n1 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.813 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.072 nvme0n1 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.072 12:28:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.072 12:28:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.072 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.330 nvme0n1 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:16.330 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.331 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 nvme0n1 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.589 12:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.589 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.847 nvme0n1 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.847 
12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.847 12:28:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.847 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.105 nvme0n1 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
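The nvmet_auth_set_key call traced above stages one digest/dhgroup/secret combination on the kernel target before the next connect attempt: it echoes 'hmac(sha512)', the FFDHE group, and then the DHHC-1 host and controller secrets. The xtrace does not show where those echoes are redirected, so the configfs paths in this sketch are an assumption based on the standard Linux nvmet layout, with the host NQN taken from the attach commands elsewhere in this log:

  # assumed nvmet configfs layout; $key/$ckey are the function's locals shown in the trace
  HOST_DIR=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$HOST_DIR/dhchap_hash"      # digest for DH-HMAC-CHAP
  echo ffdhe3072      > "$HOST_DIR/dhchap_dhgroup"   # FFDHE group for the DH exchange
  echo "$key"         > "$HOST_DIR/dhchap_key"       # host secret (keyid 2 in this iteration)
  echo "$ckey"        > "$HOST_DIR/dhchap_ctrl_key"  # controller secret, enables bidirectional auth

Only the digest, group, and key index change between iterations; the DHHC-1 secrets themselves come from the keys[]/ckeys[] arrays set up earlier in the script.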
00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.105 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.106 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.106 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.106 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.364 nvme0n1 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.364 12:28:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
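On the host side, each connect_authenticate pass seen here is driven entirely through SPDK RPCs (rpc_cmd being the autotest wrapper around scripts/rpc.py). Condensed from the commands in this trace, one iteration amounts to the following sketch, with the digest, group, and key3/ckey3 names being the ones this particular pass uses:

  # restrict the initiator to the digest/dhgroup pair under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # connect with DH-HMAC-CHAP, passing host and controller secrets by key name
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # authentication succeeded if the controller is reported under the expected name
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # tear down before the next digest/dhgroup/key combination
  rpc_cmd bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved in the log are the bdev names returned by each successful bdev_nvme_attach_controller call.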
00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.364 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.638 nvme0n1 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.638 12:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:17.638 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:17.638 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:17.638 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.638 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.638 
12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.638 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:17.638 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.638 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.639 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.909 nvme0n1 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.909 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.910 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.910 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.910 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.910 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.910 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.910 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.910 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.910 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.910 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.168 nvme0n1 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.168 12:28:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.168 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.427 nvme0n1 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
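[editor's note] The host-side cycle that the xtrace above keeps repeating can be summarized by the following sketch. It is not part of the captured log; it assumes rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py talking to the target already listening on 10.0.0.1:4420, and it reuses the key-slot names (key0..key4 / ckey0..ckey3) visible in the trace.

# Sketch only (assumption: rpc_cmd wraps scripts/rpc.py; target and initiator set up earlier in this log)
digest=sha512 dhgroup=ffdhe4096 keyid=2
# 1. restrict the host to a single digest/DH-group combination
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# 2. attach with the matching DH-HMAC-CHAP key pair (the ctrlr key is omitted for slot 4, whose ckey is empty)
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# 3. verify the controller came up, then detach so the next iteration starts clean
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0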
00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.427 12:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.686 nvme0n1 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.686 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.945 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.945 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.945 nvme0n1 00:27:18.945 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.945 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.946 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.946 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.946 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.946 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.204 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.205 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.205 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.205 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.205 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.205 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.205 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.463 nvme0n1 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.463 12:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.720 nvme0n1 00:27:19.720 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.720 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.720 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.720 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.720 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.721 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
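[editor's note] The outer structure driving these repeated blocks is visible in the frame markers (host/auth.sh@101-104): one pass per DH group, and within it one pass per key slot. A minimal sketch of that driver loop, using only the function names and loops that appear in the trace and assuming keys/ckeys hold the DHHC-1 secrets printed above (function bodies live in the test's auth.sh and are not reproduced here):

digest=sha512
for dhgroup in "${dhgroups[@]}"; do      # e.g. ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 seen in this excerpt
    for keyid in "${!keys[@]}"; do       # key slots 0-4
        # program the kernel nvmet target with hmac(sha512), the DH group and the key pair
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        # attach from the SPDK host with the matching --dhchap-key/--dhchap-ctrlr-key and verify, then detach
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done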
00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.979 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.238 nvme0n1 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.238 12:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.806 nvme0n1 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.806 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.065 nvme0n1 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:21.065 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.324 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.582 nvme0n1 00:27:21.582 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.582 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.582 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.582 12:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.582 12:28:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.582 12:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk5MGIwZjdkNjEzZTdhZThiNmZjZjQyNTY5ZWFiOGV0BKP1: 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: ]] 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjJlN2FkZjg5NjViMDFlM2ZlZjIyMjA2MTM5Mzc3NWQ0Y2Y5YTY4NGI4ZTA0YTJiYWJhZmNkYmQwZWM0NWZmN42n2cQ=: 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.582 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.149 nvme0n1 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.149 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.408 12:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.975 nvme0n1 00:27:22.975 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.975 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.975 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.975 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.975 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.975 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.976 12:28:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS: 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: ]] 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5: 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.976 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.544 nvme0n1 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDc2ZjEyYjFlNzJkM2Q5Mjc1MjJlZDJkMmE3ODEzMjYwNTRkZWZhNDNiYzQ0OTA4O23W2Q==: 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: ]] 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWRkZjU3OTYzNGRhZWMyODU3NzJmMmExMmZlZDcxNmYOARNF: 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:23.544 12:28:51 
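On the target side, each nvmet_auth_set_key call seen in the trace pushes the chosen digest ('hmac(sha512)'), the DH group (ffdhe8192) and the two DHHC-1 secrets into the kernel nvmet host entry before the next attach. The trace only shows the echoed values, not where they are written, so the dhchap_* attribute names below are an assumption about the usual nvmet configfs layout rather than something taken from this log:

  # hypothetical sketch of where the echoes in nvmet_auth_set_key end up; the host
  # directory itself exists in this run (it is rmdir'ed during cleanup), but the
  # dhchap_* attribute names are assumed, not visible in the trace
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host/dhchap_hash"      # assumed attribute name
  echo ffdhe8192      > "$host/dhchap_dhgroup"   # assumed attribute name
  echo "DHHC-1:01:NTQ2ZjcwZTY0YjM0NDQ2OTlkMzQ5M2E3ZDVhMDY4ZjdrRZMS:" > "$host/dhchap_key"
  echo "DHHC-1:01:ZDQxZDQ5ZTE0YWViMjgzYzZjZGFjNTUxYWQxNmUxNTcwCYH5:" > "$host/dhchap_ctrl_key"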
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.544 12:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.111 nvme0n1 00:27:24.111 12:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.111 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.111 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.111 12:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.111 12:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.111 12:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmExOTA3YWJkOTMzMWUyZjBmOGRiZjljMjdlOGM2MjBkMWVhZWFiNzkxNGJlMjVlZjA0OGE3NjFhMzVhZGIzOecJi70=: 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:27:24.112 12:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.678 nvme0n1 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGE2NmIwNDY4ZDJkMDk1Mzg4OWE4NjQ0ZDNmMjQ5MTc0ZDAyOGRiY2M0NTYxNDJhsom/9Q==: 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: ]] 00:27:24.678 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTc0ZWUyOTkwZWMyOGM0MDFkZTBlYmNkYTdiNGExZmY3NDg3OTQzMDQxYTA0OWIxfXPkvg==: 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.937 
12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.937 request: 00:27:24.937 { 00:27:24.937 "name": "nvme0", 00:27:24.937 "trtype": "tcp", 00:27:24.937 "traddr": "10.0.0.1", 00:27:24.937 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:24.937 "adrfam": "ipv4", 00:27:24.937 "trsvcid": "4420", 00:27:24.937 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:24.937 "method": "bdev_nvme_attach_controller", 00:27:24.937 "req_id": 1 00:27:24.937 } 00:27:24.937 Got JSON-RPC error response 00:27:24.937 response: 00:27:24.937 { 00:27:24.937 "code": -32602, 00:27:24.937 "message": "Invalid parameters" 00:27:24.937 } 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:24.937 
12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.937 request: 00:27:24.937 { 00:27:24.937 "name": "nvme0", 00:27:24.937 "trtype": "tcp", 00:27:24.937 "traddr": "10.0.0.1", 00:27:24.937 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:24.937 "adrfam": "ipv4", 00:27:24.937 "trsvcid": "4420", 00:27:24.937 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:24.937 "dhchap_key": "key2", 00:27:24.937 "method": "bdev_nvme_attach_controller", 00:27:24.937 "req_id": 1 00:27:24.937 } 00:27:24.937 Got JSON-RPC error response 00:27:24.937 response: 00:27:24.937 { 00:27:24.937 "code": -32602, 00:27:24.937 "message": "Invalid parameters" 00:27:24.937 } 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 
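After the positive matrix, auth.sh flips to failure injection: the target is re-keyed for sha256/ffdhe2048 with key1, and three attach attempts that must not succeed are issued, one with no DH-CHAP key at all, one with the wrong key, and one with the right key but a mismatched controller key (the last of these follows just below). The NOT helper inverts the exit status, so each attempt has to come back with the JSON-RPC -32602 "Invalid parameters" error seen in the log. The three calls, condensed:

  # every flag below appears verbatim in the trace; NOT succeeds only if the wrapped command fails
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0                 # no key offered
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2                                                          # wrong key
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey2                                 # controller key mismatch
  rpc_cmd bdev_nvme_get_controllers | jq length                                  # stays 0: nothing was attached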
00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.937 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.196 request: 00:27:25.196 { 00:27:25.196 "name": "nvme0", 00:27:25.196 "trtype": "tcp", 00:27:25.196 "traddr": "10.0.0.1", 00:27:25.196 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:25.196 "adrfam": "ipv4", 00:27:25.196 "trsvcid": "4420", 00:27:25.196 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:25.196 "dhchap_key": "key1", 00:27:25.196 "dhchap_ctrlr_key": "ckey2", 00:27:25.196 "method": "bdev_nvme_attach_controller", 00:27:25.196 
"req_id": 1 00:27:25.196 } 00:27:25.196 Got JSON-RPC error response 00:27:25.196 response: 00:27:25.196 { 00:27:25.196 "code": -32602, 00:27:25.196 "message": "Invalid parameters" 00:27:25.196 } 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:25.196 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:25.197 rmmod nvme_tcp 00:27:25.197 rmmod nvme_fabrics 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2269096 ']' 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2269096 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' -z 2269096 ']' 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # kill -0 2269096 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # uname 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2269096 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2269096' 00:27:25.197 killing process with pid 2269096 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # kill 2269096 00:27:25.197 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@971 -- # wait 2269096 00:27:25.456 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:25.456 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:25.456 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:25.456 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.456 12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.456 
12:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.456 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.456 12:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:27.990 12:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:30.523 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:30.523 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:30.523 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:30.523 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:30.523 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:30.523 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:30.523 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:30.523 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:30.782 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:30.782 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:30.782 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:30.782 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:30.782 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:30.782 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:30.782 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:30.782 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:32.159 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:27:32.418 12:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Iqx /tmp/spdk.key-null.P0k /tmp/spdk.key-sha256.iiV /tmp/spdk.key-sha384.4tF /tmp/spdk.key-sha512.v8i /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:32.418 12:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:34.952 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:34.952 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:34.952 0000:00:04.5 (8086 2021): Already 
using the vfio-pci driver 00:27:34.952 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:34.952 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:34.952 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:34.952 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:34.952 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:34.952 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:34.952 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:34.952 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:34.952 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:34.952 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:35.240 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:35.240 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:35.240 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:35.240 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:35.240 00:27:35.240 real 0m52.217s 00:27:35.240 user 0m44.885s 00:27:35.240 sys 0m14.298s 00:27:35.240 12:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:35.240 12:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.240 ************************************ 00:27:35.240 END TEST nvmf_auth_host 00:27:35.240 ************************************ 00:27:35.240 12:29:03 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:27:35.240 12:29:03 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:35.240 12:29:03 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:35.240 12:29:03 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:35.241 12:29:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.241 ************************************ 00:27:35.241 START TEST nvmf_digest 00:27:35.241 ************************************ 00:27:35.241 12:29:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:35.505 * Looking for test storage... 
00:27:35.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.505 12:29:03 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:35.506 12:29:03 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:35.506 12:29:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:42.064 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:42.064 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:42.064 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:42.065 Found net devices under 0000:af:00.0: cvl_0_0 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:42.065 Found net devices under 0000:af:00.1: cvl_0_1 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:42.065 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:42.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:27:42.323 00:27:42.323 --- 10.0.0.2 ping statistics --- 00:27:42.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.323 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:42.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:42.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:27:42.323 00:27:42.323 --- 10.0.0.1 ping statistics --- 00:27:42.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.323 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:42.323 ************************************ 00:27:42.323 START TEST nvmf_digest_clean 00:27:42.323 ************************************ 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # run_digest 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:42.323 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2282997 00:27:42.324 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2282997 00:27:42.324 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:42.324 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2282997 ']' 00:27:42.324 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.324 
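Before any digest traffic can run, nvmf_tcp_init (traced above) splits the two e810 ports into an initiator/target pair: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is checked with a ping in each direction. Condensed into one block:

  # network plumbing for the phy TCP tests, condensed from the nvmf_tcp_init trace above
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator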
12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:42.324 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.324 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:42.324 12:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:42.324 [2024-05-15 12:29:10.748984] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:27:42.324 [2024-05-15 12:29:10.749028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.324 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.324 [2024-05-15 12:29:10.822645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.582 [2024-05-15 12:29:10.890082] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.582 [2024-05-15 12:29:10.890123] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.582 [2024-05-15 12:29:10.890137] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.582 [2024-05-15 12:29:10.890148] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.582 [2024-05-15 12:29:10.890162] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
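For readers following the nvmf_tcp_init trace above: the test splits the two e810 ports (cvl_0_0 and cvl_0_1 on this rig) into a target-side network namespace and an initiator-side host interface, then verifies reachability with ping in both directions before nvmf_tgt is launched inside the namespace. A condensed recap of that sequence, using the addresses and interface names from this run (they will differ on other hosts):

    # nvmf_tcp_init, condensed from the nvmf/common.sh trace above
    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the host
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                           # host -> namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1    # namespace -> host

The target itself is then started as "ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc", which is exactly the nvmfappstart command recorded in the trace above.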
00:27:42.582 [2024-05-15 12:29:10.890203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.147 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:43.147 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:27:43.147 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:43.148 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:43.148 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:43.148 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.148 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:43.148 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:43.148 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:43.148 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.148 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:43.148 null0 00:27:43.148 [2024-05-15 12:29:11.665685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.406 [2024-05-15 12:29:11.689689] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:43.406 [2024-05-15 12:29:11.689939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2283042 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2283042 /var/tmp/bperf.sock 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2283042 ']' 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local 
max_retries=100 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:43.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:43.406 12:29:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:43.406 [2024-05-15 12:29:11.743918] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:27:43.406 [2024-05-15 12:29:11.743963] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283042 ] 00:27:43.406 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.406 [2024-05-15 12:29:11.814207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.406 [2024-05-15 12:29:11.888574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.339 12:29:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:44.339 12:29:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:27:44.339 12:29:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:44.339 12:29:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:44.339 12:29:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:44.339 12:29:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.339 12:29:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.902 nvme0n1 00:27:44.902 12:29:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:44.902 12:29:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:44.902 Running I/O for 2 seconds... 
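Each run_bperf pass in this test follows the same four-step pattern that the trace shows for this first (randread, 4096 B, qd=128) run: launch bdevperf paused against a private RPC socket, finish framework initialization, attach an NVMe-oF controller with TCP data digest enabled, then drive I/O from bdevperf.py. A condensed sketch of that sequence; SPDK_ROOT is shorthand for the jenkins workspace path shown in the log:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this run
    BPERF_SOCK=/var/tmp/bperf.sock

    # 1. start bdevperf paused, listening on its own RPC socket
    "$SPDK_ROOT"/build/examples/bdevperf -m 2 -r "$BPERF_SOCK" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # 2. finish subsystem initialization over RPC
    "$SPDK_ROOT"/scripts/rpc.py -s "$BPERF_SOCK" framework_start_init

    # 3. attach the target with TCP data digest (--ddgst) enabled
    "$SPDK_ROOT"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. run the 2-second workload
    "$SPDK_ROOT"/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

The three later passes in this log only change the workload arguments (-w randread/randwrite, -o 4096/131072, -q 128/16); everything else is identical.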
00:27:46.797 00:27:46.797 Latency(us) 00:27:46.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.797 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:46.797 nvme0n1 : 2.00 28665.01 111.97 0.00 0.00 4459.63 2477.26 14889.78 00:27:46.797 =================================================================================================================== 00:27:46.797 Total : 28665.01 111.97 0.00 0.00 4459.63 2477.26 14889.78 00:27:46.797 0 00:27:46.797 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:46.797 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:46.797 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:46.797 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:46.797 | select(.opcode=="crc32c") 00:27:46.797 | "\(.module_name) \(.executed)"' 00:27:46.797 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2283042 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2283042 ']' 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2283042 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2283042 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2283042' 00:27:47.055 killing process with pid 2283042 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2283042 00:27:47.055 Received shutdown signal, test time was about 2.000000 seconds 00:27:47.055 00:27:47.055 Latency(us) 00:27:47.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.055 =================================================================================================================== 00:27:47.055 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:47.055 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2283042 00:27:47.313 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:47.313 12:29:15 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:47.313 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:47.313 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:47.313 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:47.313 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:47.313 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:47.313 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2283827 00:27:47.313 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2283827 /var/tmp/bperf.sock 00:27:47.313 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:47.313 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2283827 ']' 00:27:47.313 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:47.313 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:47.314 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:47.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:47.314 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:47.314 12:29:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:47.314 [2024-05-15 12:29:15.766185] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:27:47.314 [2024-05-15 12:29:15.766241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283827 ] 00:27:47.314 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:47.314 Zero copy mechanism will not be used. 
00:27:47.314 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.314 [2024-05-15 12:29:15.835894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.598 [2024-05-15 12:29:15.905115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.162 12:29:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:48.162 12:29:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:27:48.162 12:29:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:48.162 12:29:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:48.162 12:29:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:48.419 12:29:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.419 12:29:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.676 nvme0n1 00:27:48.676 12:29:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:48.676 12:29:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:48.676 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:48.676 Zero copy mechanism will not be used. 00:27:48.676 Running I/O for 2 seconds... 
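After each pass the test reads back accel-framework statistics to confirm which module actually computed the crc32c digests; with scan_dsa=false the expected module is "software". The check is the accel_get_stats RPC piped through the jq filter visible in the trace (SPDK_ROOT as in the earlier sketch):

    "$SPDK_ROOT"/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected here: a line of the form "software <executed-count>" with a non-zero count,
    # which is what the [[ software == software ]] / (( acc_executed > 0 )) checks assert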
00:27:51.201 00:27:51.201 Latency(us) 00:27:51.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.201 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:51.201 nvme0n1 : 2.00 2811.95 351.49 0.00 0.00 5687.12 1821.90 23068.67 00:27:51.201 =================================================================================================================== 00:27:51.201 Total : 2811.95 351.49 0.00 0.00 5687.12 1821.90 23068.67 00:27:51.201 0 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:51.201 | select(.opcode=="crc32c") 00:27:51.201 | "\(.module_name) \(.executed)"' 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2283827 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2283827 ']' 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2283827 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2283827 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2283827' 00:27:51.201 killing process with pid 2283827 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2283827 00:27:51.201 Received shutdown signal, test time was about 2.000000 seconds 00:27:51.201 00:27:51.201 Latency(us) 00:27:51.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.201 =================================================================================================================== 00:27:51.201 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2283827 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:51.201 12:29:19 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2284381 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2284381 /var/tmp/bperf.sock 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2284381 ']' 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:51.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:51.201 12:29:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:51.201 [2024-05-15 12:29:19.687611] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:27:51.201 [2024-05-15 12:29:19.687664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284381 ] 00:27:51.201 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.459 [2024-05-15 12:29:19.757829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.459 [2024-05-15 12:29:19.832157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.023 12:29:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:52.023 12:29:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:27:52.023 12:29:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:52.023 12:29:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:52.023 12:29:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:52.280 12:29:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:52.280 12:29:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:52.537 nvme0n1 00:27:52.537 12:29:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:52.537 12:29:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:52.794 Running I/O for 2 seconds... 
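Tearing down each bdevperf instance goes through the common killprocess helper, whose trace repeats after every pass above. Reconstructed from those traces (the real helper lives in autotest_common.sh and may do more), it boils down to:

    # sketch of killprocess as exercised in this trace
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # still running?
        [ "$(uname)" = Linux ] && \
            process_name=$(ps --no-headers -o comm= "$pid")
        # the helper also special-cases process_name == "sudo"; that branch is not hit here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap and collect the exit status
    }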
00:27:54.690 00:27:54.690 Latency(us) 00:27:54.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.690 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:54.690 nvme0n1 : 2.00 28852.07 112.70 0.00 0.00 4430.37 3211.26 13946.06 00:27:54.690 =================================================================================================================== 00:27:54.690 Total : 28852.07 112.70 0.00 0.00 4430.37 3211.26 13946.06 00:27:54.690 0 00:27:54.690 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:54.690 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:54.690 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:54.690 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:54.690 | select(.opcode=="crc32c") 00:27:54.690 | "\(.module_name) \(.executed)"' 00:27:54.691 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:54.948 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:54.948 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:54.948 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:54.949 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:54.949 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2284381 00:27:54.949 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2284381 ']' 00:27:54.949 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2284381 00:27:54.949 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:27:54.949 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:54.949 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2284381 00:27:54.949 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:54.949 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:54.949 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2284381' 00:27:54.949 killing process with pid 2284381 00:27:54.949 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2284381 00:27:54.949 Received shutdown signal, test time was about 2.000000 seconds 00:27:54.949 00:27:54.949 Latency(us) 00:27:54.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.949 =================================================================================================================== 00:27:54.949 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:54.949 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2284381 00:27:55.224 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:55.224 12:29:23 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:55.224 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:55.224 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:55.224 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:55.224 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:55.224 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:55.224 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2285178 00:27:55.224 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2285178 /var/tmp/bperf.sock 00:27:55.224 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 2285178 ']' 00:27:55.224 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:55.224 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:55.224 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:55.225 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:55.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:55.225 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:55.225 12:29:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:55.225 [2024-05-15 12:29:23.646225] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:27:55.225 [2024-05-15 12:29:23.646279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285178 ] 00:27:55.225 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:55.225 Zero copy mechanism will not be used. 
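waitforlisten shows up in this log only through its "Waiting for process to start up and listen on UNIX domain socket ..." message and its max_retries=100 local. A minimal, purely illustrative reimplementation of that idea (not the actual autotest_common.sh helper) would poll the RPC socket until it answers, assuming SPDK_ROOT as before:

    # illustrative only -- not the autotest_common.sh implementation
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1                    # process died before listening
            "$SPDK_ROOT"/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }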
00:27:55.225 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.225 [2024-05-15 12:29:23.714394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.501 [2024-05-15 12:29:23.786201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.066 12:29:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:56.066 12:29:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:27:56.066 12:29:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:56.066 12:29:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:56.066 12:29:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:56.323 12:29:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.323 12:29:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.581 nvme0n1 00:27:56.581 12:29:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:56.581 12:29:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:56.838 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:56.838 Zero copy mechanism will not be used. 00:27:56.838 Running I/O for 2 seconds... 
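The MiB/s column in the bdevperf summaries above is simply IOPS times the I/O size: 28665.01 IOPS at 4096 B works out to about 111.97 MiB/s, matching the first randread table. A quick check with the numbers from this log (small differences come from rounding of the reported IOPS):

    # MiB/s = IOPS * io_size_bytes / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 28665.01 * 4096   / 1048576 }'   # ~111.97 (randread, 4 KiB)
    awk 'BEGIN { printf "%.2f MiB/s\n", 2811.95  * 131072 / 1048576 }'   # ~351.49 (randread, 128 KiB)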
00:27:58.734 00:27:58.734 Latency(us) 00:27:58.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.734 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:58.734 nvme0n1 : 2.01 1959.60 244.95 0.00 0.00 8147.79 5138.02 25690.11 00:27:58.734 =================================================================================================================== 00:27:58.734 Total : 1959.60 244.95 0.00 0.00 8147.79 5138.02 25690.11 00:27:58.734 0 00:27:58.734 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:58.734 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:58.734 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:58.734 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:58.734 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:58.734 | select(.opcode=="crc32c") 00:27:58.734 | "\(.module_name) \(.executed)"' 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2285178 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2285178 ']' 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2285178 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2285178 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2285178' 00:27:58.991 killing process with pid 2285178 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2285178 00:27:58.991 Received shutdown signal, test time was about 2.000000 seconds 00:27:58.991 00:27:58.991 Latency(us) 00:27:58.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.991 =================================================================================================================== 00:27:58.991 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.991 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2285178 00:27:59.250 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2282997 00:27:59.250 12:29:27 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 2282997 ']' 00:27:59.250 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 2282997 00:27:59.250 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:27:59.250 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:59.250 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2282997 00:27:59.250 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:59.250 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:59.250 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2282997' 00:27:59.250 killing process with pid 2282997 00:27:59.250 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 2282997 00:27:59.250 [2024-05-15 12:29:27.649447] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:59.250 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 2282997 00:27:59.508 00:27:59.508 real 0m17.165s 00:27:59.508 user 0m32.886s 00:27:59.508 sys 0m4.502s 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.508 ************************************ 00:27:59.508 END TEST nvmf_digest_clean 00:27:59.508 ************************************ 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:59.508 ************************************ 00:27:59.508 START TEST nvmf_digest_error 00:27:59.508 ************************************ 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # run_digest_error 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2285898 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2285898 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 
2285898 ']' 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:59.508 12:29:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:59.508 [2024-05-15 12:29:28.006756] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:27:59.508 [2024-05-15 12:29:28.006800] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.766 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.766 [2024-05-15 12:29:28.083013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.766 [2024-05-15 12:29:28.151522] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.766 [2024-05-15 12:29:28.151564] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.766 [2024-05-15 12:29:28.151577] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.766 [2024-05-15 12:29:28.151587] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.766 [2024-05-15 12:29:28.151596] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
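The nvmf_digest_error test that starts here differs from the clean pass in one respect: on the target, crc32c is assigned to the accel "error" module so digests can be corrupted on demand, and the host side is configured to count and retry the resulting NVMe errors. Condensed from the RPC calls traced below (rpc.py is scripts/rpc.py; without -s it talks to the target's default /var/tmp/spdk.sock, with -s /var/tmp/bperf.sock it talks to the bdevperf instance):

    # target side: route crc32c through the error-injection accel module
    rpc.py accel_assign_opc -o crc32c -m error

    # bdevperf side: keep per-error-code NVMe statistics and retry failed I/O (-1 = no limit)
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # start with injection disabled, attach with data digest, then corrupt 256 digests
    rpc.py accel_error_inject_error -o crc32c -t disable
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    bdevperf.py -s /var/tmp/bperf.sock perform_tests

    # the "data digest error on tqpair" messages that follow are the injected
    # failures being detected and reported as transient transport errors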
00:27:59.766 [2024-05-15 12:29:28.151630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:00.332 [2024-05-15 12:29:28.841775] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.332 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:00.590 null0 00:28:00.590 [2024-05-15 12:29:28.934150] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.590 [2024-05-15 12:29:28.958155] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:00.590 [2024-05-15 12:29:28.958403] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2286043 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2286043 /var/tmp/bperf.sock 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2286043 ']' 00:28:00.590 
12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:00.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:00.590 12:29:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:00.590 [2024-05-15 12:29:29.012109] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:28:00.590 [2024-05-15 12:29:29.012153] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286043 ] 00:28:00.590 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.590 [2024-05-15 12:29:29.081860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.848 [2024-05-15 12:29:29.156664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.412 12:29:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:01.412 12:29:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:28:01.412 12:29:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:01.412 12:29:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:01.669 12:29:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:01.669 12:29:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.669 12:29:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:01.669 12:29:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.669 12:29:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:01.669 12:29:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:01.926 nvme0n1 00:28:01.926 12:29:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:01.926 12:29:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.926 12:29:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:01.926 12:29:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.926 12:29:30 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:01.926 12:29:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:02.184 Running I/O for 2 seconds... 00:28:02.184 [2024-05-15 12:29:30.519463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.184 [2024-05-15 12:29:30.519497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.184 [2024-05-15 12:29:30.519510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.184 [2024-05-15 12:29:30.529907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.184 [2024-05-15 12:29:30.529931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.184 [2024-05-15 12:29:30.529943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.184 [2024-05-15 12:29:30.538618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.538640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.538651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.548127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.548148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.548159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.556800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.556821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.556833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.568495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.568516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.568528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.579103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.579124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.579135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.587698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.587719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.587729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.598873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.598894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.598905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.608444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.608466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.608477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.617759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.617781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.617792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.629241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.629263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.629275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.637561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.637583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.637598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.650427] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.650448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.650459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.660492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.660514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.660524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.668644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.668665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.668677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.678508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.678531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.678542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.686719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.686741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.686753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.696209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.696232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.696242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.185 [2024-05-15 12:29:30.705392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.185 [2024-05-15 12:29:30.705413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.185 [2024-05-15 12:29:30.705424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:02.443 [2024-05-15 12:29:30.714897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.443 [2024-05-15 12:29:30.714922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.443 [2024-05-15 12:29:30.714934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.443 [2024-05-15 12:29:30.725054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.443 [2024-05-15 12:29:30.725081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.443 [2024-05-15 12:29:30.725092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.443 [2024-05-15 12:29:30.733350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.443 [2024-05-15 12:29:30.733372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.443 [2024-05-15 12:29:30.733383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.443 [2024-05-15 12:29:30.743130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.443 [2024-05-15 12:29:30.743152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.443 [2024-05-15 12:29:30.743163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.443 [2024-05-15 12:29:30.752009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.443 [2024-05-15 12:29:30.752030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.443 [2024-05-15 12:29:30.752040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.443 [2024-05-15 12:29:30.761391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.443 [2024-05-15 12:29:30.761412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.443 [2024-05-15 12:29:30.761422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.443 [2024-05-15 12:29:30.769761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.769783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.769794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.780117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.780139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.780150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.788437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.788459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.788469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.797854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.797878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.797888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.806680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.806702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.806712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.816132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.816155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.816165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.825900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.825924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.825934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.833987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.834011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.834022] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.845442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.845465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.845476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.856185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.856210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.856221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.866144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.866166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.866177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.875327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.875348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.875359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.883629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.883651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.883665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.893803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.893825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.893836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.902922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.902944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.902955] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.912073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.912094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.912104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.920760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.920781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.920791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.929619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.929641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.929652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.939097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.939118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.939129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.946507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.946528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.946539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.956544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.956565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.444 [2024-05-15 12:29:30.956576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.444 [2024-05-15 12:29:30.965971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.444 [2024-05-15 12:29:30.965996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:02.444 [2024-05-15 12:29:30.966007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.702 [2024-05-15 12:29:30.974514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.702 [2024-05-15 12:29:30.974539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.702 [2024-05-15 12:29:30.974551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.702 [2024-05-15 12:29:30.983918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.702 [2024-05-15 12:29:30.983941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.702 [2024-05-15 12:29:30.983953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.702 [2024-05-15 12:29:30.993112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:30.993134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:30.993146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.001840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.001862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.001872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.009946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.009968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.009978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.019217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.019239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.019249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.027862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.027884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23752 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.027895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.036796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.036819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.036830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.046008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.046030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.046040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.055392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.055414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.055424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.062981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.063001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.063012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.073840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.073863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.073874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.081151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.081172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.081183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.091343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.091365] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.091376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.100517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.100539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.100550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.108424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.108445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.108456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.117821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.117842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.117856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.126508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.126530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.126541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.136187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.136213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.136224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.143755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.143776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.143786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.154623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.154646] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.154656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.163383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.163405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.163415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.171276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.171297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.171307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.180533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.180554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.180565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.189456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.189477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.189487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.198093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.198114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.198124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.207660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.207681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.207692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.216071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 
00:28:02.703 [2024-05-15 12:29:31.216092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.216103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.703 [2024-05-15 12:29:31.225806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.703 [2024-05-15 12:29:31.225828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.703 [2024-05-15 12:29:31.225839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.233779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.233804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.233815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.244284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.244308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.244319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.253060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.253083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.253094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.262476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.262498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.262509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.271207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.271229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.271243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.279615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.279637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.279647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.289597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.289618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.289629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.297115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.297136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.297146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.307155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.307177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.307187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.315964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.315985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.315996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.326017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.326039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.326049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.333721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.333743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.333753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.343099] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.343120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.343131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.351788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.351813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.351824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.360951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.360973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.360984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.369891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.369912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.369923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.378881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.378903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.378913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.388600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.388621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.388631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.396520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.396541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.396551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:02.962 [2024-05-15 12:29:31.406435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.406455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.406466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.414988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.415010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.415021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.424217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.424239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.424249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.433650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.433672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.962 [2024-05-15 12:29:31.433682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.962 [2024-05-15 12:29:31.441749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.962 [2024-05-15 12:29:31.441770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.963 [2024-05-15 12:29:31.441781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.963 [2024-05-15 12:29:31.450962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.963 [2024-05-15 12:29:31.450983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.963 [2024-05-15 12:29:31.450993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.963 [2024-05-15 12:29:31.458969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.963 [2024-05-15 12:29:31.458989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.963 [2024-05-15 12:29:31.458999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.963 [2024-05-15 12:29:31.469134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.963 [2024-05-15 12:29:31.469154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.963 [2024-05-15 12:29:31.469165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.963 [2024-05-15 12:29:31.477343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.963 [2024-05-15 12:29:31.477363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.963 [2024-05-15 12:29:31.477373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.963 [2024-05-15 12:29:31.486567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:02.963 [2024-05-15 12:29:31.486588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.963 [2024-05-15 12:29:31.486598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.495928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.495952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.495974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.503993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.504015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.504030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.514336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.514358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.514369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.522801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.522824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.522835] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.532130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.532153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.532163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.539973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.539994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.540005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.550104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.550125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.550135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.558383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.558403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.558414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.567552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.567572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.567582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.576433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.576453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.576463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.584957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.584981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.584992] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.594049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.594070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.594080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.605102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.605123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.605133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.614817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.614837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.614847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.623154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.623174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.623185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.632632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.632653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.632664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.640742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.640762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.640773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.650330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.650351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:03.221 [2024-05-15 12:29:31.650361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.657941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.657963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.657973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.667834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.667855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.667866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.676349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.676370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.676380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.685844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.685864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.685874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.694268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.694288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.694298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.702998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.703019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.703029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.711800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.221 [2024-05-15 12:29:31.711821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2993 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.221 [2024-05-15 12:29:31.711831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.221 [2024-05-15 12:29:31.720945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.222 [2024-05-15 12:29:31.720966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.222 [2024-05-15 12:29:31.720976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.222 [2024-05-15 12:29:31.729816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.222 [2024-05-15 12:29:31.729838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.222 [2024-05-15 12:29:31.729848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.222 [2024-05-15 12:29:31.738407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.222 [2024-05-15 12:29:31.738428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.222 [2024-05-15 12:29:31.738444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.222 [2024-05-15 12:29:31.748163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.222 [2024-05-15 12:29:31.748196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.222 [2024-05-15 12:29:31.748209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.755913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.755938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.755950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.765146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.765168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.765179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.774645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.774666] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.774676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.782654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.782675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.782685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.791843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.791864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.791874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.800960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.800981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.800992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.809783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.809805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.809815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.818872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.818893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.818905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.827577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.827598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.827609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.836459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.836480] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.836491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.845089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.845110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.845120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.854452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.854473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.854484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.862975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.862997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.863008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.872894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.872915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.872926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.881493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.881514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.881524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.480 [2024-05-15 12:29:31.889559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.480 [2024-05-15 12:29:31.889581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.480 [2024-05-15 12:29:31.889595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:31.899393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:31.899415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:31.899425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:31.907767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:31.907788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:31.907798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:31.916604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:31.916624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:31.916634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:31.925644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:31.925665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:31.925675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:31.934454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:31.934476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:31.934486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:31.943781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:31.943801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:31.943811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:31.951504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:31.951525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:31.951535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:31.960880] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:31.960900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:31.960911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:31.970852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:31.970877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:31.970887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:31.978645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:31.978666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:31.978677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:31.988114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:31.988135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:31.988146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:31.996583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:31.996604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:31.996614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.481 [2024-05-15 12:29:32.005840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.481 [2024-05-15 12:29:32.005863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.481 [2024-05-15 12:29:32.005875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.739 [2024-05-15 12:29:32.016165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.739 [2024-05-15 12:29:32.016189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.739 [2024-05-15 12:29:32.016205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:03.739 [2024-05-15 12:29:32.025157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.739 [2024-05-15 12:29:32.025178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.739 [2024-05-15 12:29:32.025188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.739 [2024-05-15 12:29:32.036085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.036106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.036117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.047318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.047340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.047350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.055264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.055284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.055295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.066195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.066216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.066226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.075885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.075906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.075917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.084858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.084878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.084889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.094051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.094072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.094082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.102870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.102891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.102901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.112244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.112265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.112275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.121608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.121630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.121640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.130582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.130603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.130617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.143298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.143319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.143330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.151857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.151877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.151888] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.161126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.161147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.161157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.169630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.169650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.169660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.183491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.183512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.183522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.192182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.192209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.192219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.200720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.200740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.200751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.209211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.209231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.209242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.218703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.218727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.218738] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.232291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.232313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.232323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.240937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.240957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.240968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.249776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.249796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.249806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.257717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.257737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.257748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.740 [2024-05-15 12:29:32.267880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.740 [2024-05-15 12:29:32.267904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.740 [2024-05-15 12:29:32.267916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.998 [2024-05-15 12:29:32.276275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.998 [2024-05-15 12:29:32.276300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.998 [2024-05-15 12:29:32.276312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.998 [2024-05-15 12:29:32.285018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.998 [2024-05-15 12:29:32.285040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:03.998 [2024-05-15 12:29:32.285051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.998 [2024-05-15 12:29:32.294394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.998 [2024-05-15 12:29:32.294417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.998 [2024-05-15 12:29:32.294427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.998 [2024-05-15 12:29:32.304037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.998 [2024-05-15 12:29:32.304059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.998 [2024-05-15 12:29:32.304070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.998 [2024-05-15 12:29:32.312395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.998 [2024-05-15 12:29:32.312416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.998 [2024-05-15 12:29:32.312427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.998 [2024-05-15 12:29:32.321415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.998 [2024-05-15 12:29:32.321437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.998 [2024-05-15 12:29:32.321448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.998 [2024-05-15 12:29:32.330525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.998 [2024-05-15 12:29:32.330546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.998 [2024-05-15 12:29:32.330558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.998 [2024-05-15 12:29:32.338458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.998 [2024-05-15 12:29:32.338479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.998 [2024-05-15 12:29:32.338489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.998 [2024-05-15 12:29:32.352636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.998 [2024-05-15 12:29:32.352658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14218 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.998 [2024-05-15 12:29:32.352669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.998 [2024-05-15 12:29:32.362722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.362744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.362754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.372239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.372259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.372269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.380986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.381007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.381021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.390010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.390031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.390041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.398117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.398138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.398149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.407365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.407387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.407397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.416804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.416825] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.416835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.424908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.424931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.424942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.433804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.433827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.433838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.443220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.443241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.443252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.451481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.451503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.451513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.461060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.461082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.461093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.470179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.470207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.999 [2024-05-15 12:29:32.470218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.999 [2024-05-15 12:29:32.478935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40) 00:28:03.999 [2024-05-15 12:29:32.478956] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.999 [2024-05-15 12:29:32.478967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:03.999 [2024-05-15 12:29:32.488059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40)
00:28:03.999 [2024-05-15 12:29:32.488081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.999 [2024-05-15 12:29:32.488091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:03.999 [2024-05-15 12:29:32.496021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40)
00:28:03.999 [2024-05-15 12:29:32.496042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.999 [2024-05-15 12:29:32.496053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:03.999 [2024-05-15 12:29:32.504876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x749c40)
00:28:03.999 [2024-05-15 12:29:32.504897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.999 [2024-05-15 12:29:32.504907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:03.999
00:28:03.999 Latency(us)
00:28:03.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:03.999 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:03.999 nvme0n1 : 2.00 27635.11 107.95 0.00 0.00 4625.77 2254.44 16357.79
00:28:03.999 ===================================================================================================================
00:28:03.999 Total : 27635.11 107.95 0.00 0.00 4625.77 2254.44 16357.79
00:28:03.999 0
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:04.257 | .driver_specific
00:28:04.257 | .nvme_error
00:28:04.257 | .status_code
00:28:04.257 | .command_transient_transport_error'
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
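The get_transient_errcount step above is what turns the flood of digest errors into a pass/fail signal: it reads the per-bdev NVMe error counters that bdevperf keeps (the harness enables them with --nvme-error-stat when it configures the bdev_nvme layer, as seen further below) and extracts the COMMAND TRANSIENT TRANSPORT ERROR count with jq, which the test then requires to be non-zero ((( 217 > 0 )) in this run). A minimal standalone sketch of the same query, using the rpc.py path and /var/tmp/bperf.sock socket from this job (the errcount variable name is only for illustration):

    # Sketch: transient transport error count recorded for nvme0n1
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # the digest error case only passes if at least one such error was observed

With that check done, the harness tears down the current bdevperf instance and moves on to the next I/O size: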
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2286043
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2286043 ']'
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2286043
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2286043
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2286043'
00:28:04.257 killing process with pid 2286043
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2286043
00:28:04.257 Received shutdown signal, test time was about 2.000000 seconds
00:28:04.257
00:28:04.257 Latency(us)
00:28:04.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:04.257 ===================================================================================================================
00:28:04.257 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:04.257 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2286043
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2286827
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2286827 /var/tmp/bperf.sock
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2286827 ']'
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:04.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable
00:28:04.514 12:29:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
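Here the previous bdevperf instance (pid 2286043) has been killed and run_bperf_err relaunches bdevperf for the 128 KiB, queue-depth-16 randread error case; waitforlisten then blocks until the new process (pid 2286827) is listening on /var/tmp/bperf.sock, and its startup banner follows below. Reconstructed from the trace above, and only as a sketch of what the harness does, the launch amounts to:

    # Sketch: start bdevperf pinned to core 1 (-m 2), exposing an RPC socket (-r) and
    # waiting for an explicit perform_tests RPC before running I/O (-z).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock   # helper from common/autotest_common.sh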
00:28:04.514 [2024-05-15 12:29:33.010870] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization...
00:28:04.514 [2024-05-15 12:29:33.010922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286827 ]
00:28:04.514 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:04.514 Zero copy mechanism will not be used.
00:28:04.514 EAL: No free 2048 kB hugepages reported on node 1
00:28:04.771 [2024-05-15 12:29:33.081012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:04.771 [2024-05-15 12:29:33.155055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:05.335 12:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:28:05.335 12:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0
00:28:05.335 12:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:05.335 12:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:05.593 12:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:05.593 12:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:05.593 12:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:05.593 12:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:05.593 12:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:05.593 12:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:05.850 nvme0n1
00:28:05.850 12:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:05.850 12:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:05.850 12:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:05.850 12:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:05.850 12:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:05.850 12:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:05.850 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:05.850 Zero copy mechanism will not be used.
00:28:05.850 Running I/O for 2 seconds...
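The trace above is the whole configuration for this error case, driven over RPC before any I/O starts: NVMe error counters and bdev-layer retries are enabled, crc32c error injection is cleared while the controller attaches, the NVMe-oF TCP controller is attached with data digest (--ddgst) enabled, and only then is crc32c corruption injected (accel_error_inject_error -o crc32c -t corrupt -i 32), so the receive-path digest check (nvme_tcp_accel_seq_recv_compute_crc32_done in the entries that follow) reports data digest errors and each read completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A condensed sketch of that sequence, assuming bperf_rpc targets bdevperf's socket at /var/tmp/bperf.sock and that rpc_cmd (the autotest_common wrapper) addresses the target application's default RPC socket, which this excerpt does not show:

    BPERF_RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status NVMe error counters; -1 = keep retrying failed I/O
    rpc_cmd accel_error_inject_error -o crc32c -t disable                      # no crc32c corruption while the controller attaches
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                                 # attach with TCP data digest (DDGST) checking on
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32                # start corrupting crc32c results (-i 32 as in the trace)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                                   # kick off the 2-second randread workload

With the corruption armed, the 128 KiB (len:32 block) reads below all hit the same digest error path that the 4 KiB case exercised above.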
00:28:05.850 [2024-05-15 12:29:34.342182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:05.850 [2024-05-15 12:29:34.342223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.850 [2024-05-15 12:29:34.342236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:05.850 [2024-05-15 12:29:34.355161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:05.850 [2024-05-15 12:29:34.355187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.850 [2024-05-15 12:29:34.355206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:05.850 [2024-05-15 12:29:34.365934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:05.850 [2024-05-15 12:29:34.365958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.850 [2024-05-15 12:29:34.365969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:05.850 [2024-05-15 12:29:34.376405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:05.850 [2024-05-15 12:29:34.376435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:05.850 [2024-05-15 12:29:34.376456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.108 [2024-05-15 12:29:34.386845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:06.108 [2024-05-15 12:29:34.386870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.108 [2024-05-15 12:29:34.386882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:06.108 [2024-05-15 12:29:34.397272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:06.108 [2024-05-15 12:29:34.397294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.108 [2024-05-15 12:29:34.397305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:06.108 [2024-05-15 12:29:34.407791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:06.108 [2024-05-15 12:29:34.407813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.108 [2024-05-15 12:29:34.407824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:06.108 [2024-05-15 12:29:34.418079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:06.108 [2024-05-15 12:29:34.418100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.108 [2024-05-15 12:29:34.418111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.108 [2024-05-15 12:29:34.428361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:06.108 [2024-05-15 12:29:34.428382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.108 [2024-05-15 12:29:34.428393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:06.108 [2024-05-15 12:29:34.438724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:06.109 [2024-05-15 12:29:34.438747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.109 [2024-05-15 12:29:34.438758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:06.109 [2024-05-15 12:29:34.448994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:06.109 [2024-05-15 12:29:34.449016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.109 [2024-05-15 12:29:34.449026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:06.109 [2024-05-15 12:29:34.459286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:06.109 [2024-05-15 12:29:34.459307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.109 [2024-05-15 12:29:34.459317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:06.109 [2024-05-15 12:29:34.469624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:06.109 [2024-05-15 12:29:34.469649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.109 [2024-05-15 12:29:34.469660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:06.109 [2024-05-15 12:29:34.480092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:06.109 [2024-05-15 12:29:34.480115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:06.109 [2024-05-15 12:29:34.480126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:06.109 [2024-05-15 12:29:34.490389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0)
00:28:06.109 [2024-05-15 12:29:34.490411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.109 [2024-05-15 12:29:34.490422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[The same three-line pattern repeats for the rest of this interval, from 12:29:34.501 through 12:29:36.129: nvme_tcp.c:1450 reports a data digest error on tqpair=(0x196eae0), nvme_qpair.c prints the affected READ (sqid:1 cid:15 nsid:1, len:32, with the LBA varying per command), and each command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0, so every READ remains eligible for retry.]
00:28:07.661 [2024-05-15 12:29:36.129179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.661 [2024-05-15 12:29:36.129196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.661 [2024-05-15 12:29:36.139399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.661 [2024-05-15 12:29:36.139421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.661 [2024-05-15 12:29:36.139431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.661 [2024-05-15 12:29:36.149648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.661 [2024-05-15 12:29:36.149669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.661 [2024-05-15 12:29:36.149679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.661 [2024-05-15 12:29:36.159998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.661 [2024-05-15 12:29:36.160019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.661 [2024-05-15 12:29:36.160029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.661 [2024-05-15 12:29:36.170257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.661 [2024-05-15 12:29:36.170278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.661 [2024-05-15 12:29:36.170288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.661 [2024-05-15 12:29:36.180476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.661 [2024-05-15 12:29:36.180498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.661 [2024-05-15 12:29:36.180508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.918 [2024-05-15 12:29:36.190747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.918 [2024-05-15 12:29:36.190772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.918 [2024-05-15 12:29:36.190783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.918 [2024-05-15 12:29:36.200999] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.918 [2024-05-15 12:29:36.201027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.918 [2024-05-15 12:29:36.201038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.918 [2024-05-15 12:29:36.211224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.918 [2024-05-15 12:29:36.211247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.918 [2024-05-15 12:29:36.211258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.918 [2024-05-15 12:29:36.221455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.918 [2024-05-15 12:29:36.221478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.918 [2024-05-15 12:29:36.221488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.918 [2024-05-15 12:29:36.231669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.918 [2024-05-15 12:29:36.231692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.918 [2024-05-15 12:29:36.231702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.918 [2024-05-15 12:29:36.241985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.919 [2024-05-15 12:29:36.242006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.919 [2024-05-15 12:29:36.242017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.919 [2024-05-15 12:29:36.252221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.919 [2024-05-15 12:29:36.252242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.919 [2024-05-15 12:29:36.252252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.919 [2024-05-15 12:29:36.262455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.919 [2024-05-15 12:29:36.262476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.919 [2024-05-15 12:29:36.262486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:28:07.919 [2024-05-15 12:29:36.272688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.919 [2024-05-15 12:29:36.272708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.919 [2024-05-15 12:29:36.272718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.919 [2024-05-15 12:29:36.282951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.919 [2024-05-15 12:29:36.282971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.919 [2024-05-15 12:29:36.282981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.919 [2024-05-15 12:29:36.293159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.919 [2024-05-15 12:29:36.293179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.919 [2024-05-15 12:29:36.293189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.919 [2024-05-15 12:29:36.303398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.919 [2024-05-15 12:29:36.303418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.919 [2024-05-15 12:29:36.303428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.919 [2024-05-15 12:29:36.313475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196eae0) 00:28:07.919 [2024-05-15 12:29:36.313497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.919 [2024-05-15 12:29:36.313507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.919 00:28:07.919 Latency(us) 00:28:07.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.919 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:07.919 nvme0n1 : 2.00 2629.41 328.68 0.00 0.00 6080.24 4587.52 28730.98 00:28:07.919 =================================================================================================================== 00:28:07.919 Total : 2629.41 328.68 0.00 0.00 6080.24 4587.52 28730.98 00:28:07.919 0 00:28:07.919 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:07.919 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:07.919 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:07.919 | .driver_specific 00:28:07.919 | .nvme_error 00:28:07.919 | .status_code 00:28:07.919 | 
.command_transient_transport_error' 00:28:07.919 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:08.177 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:28:08.177 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2286827 00:28:08.177 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2286827 ']' 00:28:08.177 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2286827 00:28:08.177 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:28:08.177 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:08.177 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2286827 00:28:08.177 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:28:08.177 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:28:08.177 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2286827' 00:28:08.177 killing process with pid 2286827 00:28:08.177 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2286827 00:28:08.177 Received shutdown signal, test time was about 2.000000 seconds 00:28:08.177 00:28:08.177 Latency(us) 00:28:08.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.177 =================================================================================================================== 00:28:08.177 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:08.177 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2286827 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2287394 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2287394 /var/tmp/bperf.sock 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2287394 ']' 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:08.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:08.435 12:29:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.435 [2024-05-15 12:29:36.813078] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:28:08.435 [2024-05-15 12:29:36.813128] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287394 ] 00:28:08.435 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.435 [2024-05-15 12:29:36.883410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.435 [2024-05-15 12:29:36.957545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.392 12:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:09.392 12:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:28:09.392 12:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:09.392 12:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:09.392 12:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:09.392 12:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.392 12:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.392 12:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.392 12:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.392 12:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.665 nvme0n1 00:28:09.665 12:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:09.665 12:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.665 12:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.665 12:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.665 12:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:09.665 12:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:09.665 Running I/O for 2 seconds... 
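[editor's note] For readers following the xtrace above, this is the sequence the randwrite leg of nvmf_digest_error has just set up, collected into one minimal shell sketch. Every command is taken from the host/digest.sh trace in this log; the socket path /var/tmp/bperf.sock, the target address 10.0.0.2:4420 and the subsystem NQN nqn.2016-06.io.spdk:cnode1 are the values of this particular run, not general defaults, and the comment about rpc_cmd resolving to scripts/rpc.py against the target app's default RPC socket is an assumption about the autotest helper, not something stated in the log itself.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Start bdevperf on its own RPC socket: 2 s of 4 KiB randwrite at QD 128,
    # idle until perform_tests is issued (-z). The harness also waits for the
    # socket to appear before sending any RPCs.
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    # Initiator side (bperf.sock): keep per-type NVMe error counters and retry forever.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target side (the trace's rpc_cmd helper, assumed to be scripts/rpc.py against the
    # target's default RPC socket): corrupt 256 crc32c operations so data digests fail on the wire.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

    # Attach the NVMe/TCP controller with data digest enabled (--ddgst) and kick off the workload.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The digest-error lines that follow are the expected result of this setup: each corrupted crc32c shows up as a data digest error in tcp.c, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the retry policy above absorbs.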
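[editor's note] The "(( 170 > 0 ))" check earlier in this log is the pass criterion for the previous randread leg, and the same readout happens after this randwrite run: the counters accumulated by --nvme-error-stat are fetched from bdevperf over its RPC socket and the transient-transport-error bucket is extracted with jq. Below is a minimal reconstruction of the get_transient_errcount helper as it appears in the host/digest.sh@27/@28 trace; the function body is inferred from those two trace lines, only the RPC name, socket path and jq filter are verbatim from the log.

    get_transient_errcount() {
        # With --nvme-error-stat enabled, bdev_get_iostat includes per-bdev NVMe error
        # statistics under driver_specific.nvme_error.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # The test then asserts that at least one transient transport error was counted:
    (( $(get_transient_errcount nvme0n1) > 0 ))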
00:28:09.665 [2024-05-15 12:29:38.132956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fcdd0 00:28:09.665 [2024-05-15 12:29:38.133963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.665 [2024-05-15 12:29:38.133991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:09.665 [2024-05-15 12:29:38.143181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.665 [2024-05-15 12:29:38.143408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.665 [2024-05-15 12:29:38.143430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.665 [2024-05-15 12:29:38.152324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.665 [2024-05-15 12:29:38.152514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.665 [2024-05-15 12:29:38.152535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.665 [2024-05-15 12:29:38.161404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.665 [2024-05-15 12:29:38.161606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.665 [2024-05-15 12:29:38.161627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.665 [2024-05-15 12:29:38.170481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.665 [2024-05-15 12:29:38.170690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.665 [2024-05-15 12:29:38.170710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.665 [2024-05-15 12:29:38.179437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.665 [2024-05-15 12:29:38.179649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.666 [2024-05-15 12:29:38.179668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.666 [2024-05-15 12:29:38.188510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.666 [2024-05-15 12:29:38.188716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.666 [2024-05-15 12:29:38.188735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 
cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.197832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.198043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.198067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.206921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.207128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.207149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.216009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.216222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.216243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.225099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.225315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.225336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.234129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.234345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.234365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.243186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.243398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.243417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.252249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.252454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.252474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.261338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.261543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.261562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.270351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.270556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.270578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.279399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.279605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.279624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.288455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.288660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.288679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.297500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.297710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.297729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.306511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.306713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.306732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.315563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.315770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.315791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.324584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.324788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.324807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.333631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.333835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.333855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.342645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.342848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.342868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.351654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.351864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.924 [2024-05-15 12:29:38.351882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.924 [2024-05-15 12:29:38.360691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.924 [2024-05-15 12:29:38.360896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.925 [2024-05-15 12:29:38.360915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.925 [2024-05-15 12:29:38.369741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.925 [2024-05-15 12:29:38.369945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.925 [2024-05-15 12:29:38.369964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.925 [2024-05-15 12:29:38.378807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.925 [2024-05-15 12:29:38.379012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.925 [2024-05-15 12:29:38.379031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.925 [2024-05-15 12:29:38.387860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.925 [2024-05-15 12:29:38.388071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.925 [2024-05-15 12:29:38.388090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.925 [2024-05-15 12:29:38.397020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.925 [2024-05-15 12:29:38.397226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.925 [2024-05-15 12:29:38.397245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.925 [2024-05-15 12:29:38.406085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.925 [2024-05-15 12:29:38.406293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.925 [2024-05-15 12:29:38.406313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.925 [2024-05-15 12:29:38.415133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.925 [2024-05-15 12:29:38.415347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.925 [2024-05-15 12:29:38.415366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.925 [2024-05-15 12:29:38.424150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.925 [2024-05-15 12:29:38.424362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.925 [2024-05-15 12:29:38.424382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.925 [2024-05-15 12:29:38.433187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.925 [2024-05-15 12:29:38.433398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.925 [2024-05-15 12:29:38.433418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.925 [2024-05-15 12:29:38.442247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.925 [2024-05-15 12:29:38.442457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.925 [2024-05-15 
12:29:38.442476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:09.925 [2024-05-15 12:29:38.451444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:09.925 [2024-05-15 12:29:38.451646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.925 [2024-05-15 12:29:38.451669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.460685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.460892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.183 [2024-05-15 12:29:38.460915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.469768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.469973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.183 [2024-05-15 12:29:38.469993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.478779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.478982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.183 [2024-05-15 12:29:38.479003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.487823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.488027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.183 [2024-05-15 12:29:38.488047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.496831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.497033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.183 [2024-05-15 12:29:38.497052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.505873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.506077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:10.183 [2024-05-15 12:29:38.506098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.514909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.515114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.183 [2024-05-15 12:29:38.515132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.523877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.524083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.183 [2024-05-15 12:29:38.524103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.532896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.533102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.183 [2024-05-15 12:29:38.533121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.541920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.542127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.183 [2024-05-15 12:29:38.542146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.550945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.551150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.183 [2024-05-15 12:29:38.551169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.559984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.560197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.183 [2024-05-15 12:29:38.560216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.183 [2024-05-15 12:29:38.569132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.183 [2024-05-15 12:29:38.569344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5353 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:10.184 [2024-05-15 12:29:38.569363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.184 [2024-05-15 12:29:38.578229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.184 [2024-05-15 12:29:38.578437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.184 [2024-05-15 12:29:38.578456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.184 [2024-05-15 12:29:38.587280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.184 [2024-05-15 12:29:38.587488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.184 [2024-05-15 12:29:38.587507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.184 [2024-05-15 12:29:38.596310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.184 [2024-05-15 12:29:38.596518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.184 [2024-05-15 12:29:38.596537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.184 [2024-05-15 12:29:38.605349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.184 [2024-05-15 12:29:38.605555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.184 [2024-05-15 12:29:38.605574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.184 [2024-05-15 12:29:38.614386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.184 [2024-05-15 12:29:38.614593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.184 [2024-05-15 12:29:38.614611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.184 [2024-05-15 12:29:38.623304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.184 [2024-05-15 12:29:38.623514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:10.184 [2024-05-15 12:29:38.623532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:10.184 [2024-05-15 12:29:38.632339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:10.184 [2024-05-15 12:29:38.632546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:13749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.184 [2024-05-15 12:29:38.632565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:28:10.184 [2024-05-15 12:29:38.641382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8
00:28:10.184 [2024-05-15 12:29:38.641604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.184 [2024-05-15 12:29:38.641622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0
[... the same three-line sequence (tcp.c:2058:data_crc32_calc_done "Data digest error on tqpair=(0x22a9ba0)", nvme_qpair.c: 243:nvme_io_qpair_print_command WRITE command print, nvme_qpair.c: 474:spdk_nvme_print_completion "COMMAND TRANSIENT TRANSPORT ERROR (00/22)") repeats continuously with varying cid, lba, pdu and sqhd values from 12:29:38.650 through 12:29:39.952 ...]
00:28:11.479 [2024-05-15 12:29:39.960530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190f1430
00:28:11.479 [2024-05-15 12:29:39.961587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.479 [2024-05-15 12:29:39.961606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.479 [2024-05-15 12:29:39.969173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190ed920 00:28:11.479 [2024-05-15 12:29:39.970234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.479 [2024-05-15 12:29:39.970253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.479 [2024-05-15 12:29:39.977853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190eee38 00:28:11.479 [2024-05-15 12:29:39.978828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.479 [2024-05-15 12:29:39.978848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.479 [2024-05-15 12:29:39.986499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190e7c50 00:28:11.479 [2024-05-15 12:29:39.987470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.479 [2024-05-15 12:29:39.987489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.479 [2024-05-15 12:29:39.995139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190f20d8 00:28:11.479 [2024-05-15 12:29:39.996116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.479 [2024-05-15 12:29:39.996135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.480 [2024-05-15 12:29:40.003935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190ed0b0 00:28:11.480 [2024-05-15 12:29:40.004930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.480 [2024-05-15 12:29:40.004951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.737 [2024-05-15 12:29:40.012841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190ee190 00:28:11.737 [2024-05-15 12:29:40.013844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.737 [2024-05-15 12:29:40.013868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.737 [2024-05-15 12:29:40.022367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with 
pdu=0x2000190e9e10 00:28:11.737 [2024-05-15 12:29:40.023375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.737 [2024-05-15 12:29:40.023397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.737 [2024-05-15 12:29:40.031414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190ebb98 00:28:11.737 [2024-05-15 12:29:40.032415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.737 [2024-05-15 12:29:40.032437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.737 [2024-05-15 12:29:40.040324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190e8d30 00:28:11.737 [2024-05-15 12:29:40.041324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.737 [2024-05-15 12:29:40.041345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.737 [2024-05-15 12:29:40.050177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190e99d8 00:28:11.737 [2024-05-15 12:29:40.051175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.737 [2024-05-15 12:29:40.051206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.737 [2024-05-15 12:29:40.059086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fcdd0 00:28:11.737 [2024-05-15 12:29:40.060087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.737 [2024-05-15 12:29:40.060109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.737 [2024-05-15 12:29:40.067966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fb480 00:28:11.737 [2024-05-15 12:29:40.068959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.737 [2024-05-15 12:29:40.068979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.737 [2024-05-15 12:29:40.076850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fe2e8 00:28:11.737 [2024-05-15 12:29:40.077845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.737 [2024-05-15 12:29:40.077866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.737 [2024-05-15 12:29:40.085760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22a9ba0) with pdu=0x2000190fdeb0 00:28:11.737 [2024-05-15 12:29:40.086748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.737 [2024-05-15 12:29:40.086768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.737 [2024-05-15 12:29:40.094657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190f3a28 00:28:11.738 [2024-05-15 12:29:40.095655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.738 [2024-05-15 12:29:40.095679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.738 [2024-05-15 12:29:40.103555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190f2948 00:28:11.738 [2024-05-15 12:29:40.104541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.738 [2024-05-15 12:29:40.104562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.738 [2024-05-15 12:29:40.112457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ba0) with pdu=0x2000190fda78 00:28:11.738 [2024-05-15 12:29:40.113469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.738 [2024-05-15 12:29:40.113488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:11.738 00:28:11.738 Latency(us) 00:28:11.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.738 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:11.738 nvme0n1 : 2.00 27943.17 109.15 0.00 0.00 4574.25 2346.19 18245.22 00:28:11.738 =================================================================================================================== 00:28:11.738 Total : 27943.17 109.15 0.00 0.00 4574.25 2346.19 18245.22 00:28:11.738 0 00:28:11.738 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:11.738 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:11.738 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:11.738 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:11.738 | .driver_specific 00:28:11.738 | .nvme_error 00:28:11.738 | .status_code 00:28:11.738 | .command_transient_transport_error' 00:28:11.995 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 )) 00:28:11.995 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2287394 00:28:11.995 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2287394 ']' 00:28:11.995 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- 
# kill -0 2287394 00:28:11.995 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:28:11.995 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:11.995 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2287394 00:28:11.995 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:28:11.995 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:28:11.995 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2287394' 00:28:11.995 killing process with pid 2287394 00:28:11.995 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2287394 00:28:11.995 Received shutdown signal, test time was about 2.000000 seconds 00:28:11.995 00:28:11.995 Latency(us) 00:28:11.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.995 =================================================================================================================== 00:28:11.995 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:11.995 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2287394 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2287999 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2287999 /var/tmp/bperf.sock 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2287999 ']' 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:12.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:12.253 12:29:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.253 [2024-05-15 12:29:40.629948] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
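For reference, the bdevperf command line traced just above is what launches the I/O generator for this 131072/16 error run. The flag glosses below are based on bdevperf's usage text rather than anything stated in the log, so treat them as an annotated reading, not authoritative:

  # -m 2: core mask 0x2, i.e. run the reactor on core 1
  # -r /var/tmp/bperf.sock: RPC socket that the bperf_rpc/bperf_py helpers talk to
  # -w randwrite -o 131072 -q 16 -t 2: random 128 KiB writes, queue depth 16, for 2 seconds
  # -z: start idle and wait for a perform_tests RPC before issuing any I/O
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z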
00:28:12.253 [2024-05-15 12:29:40.630003] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287999 ] 00:28:12.253 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:12.253 Zero copy mechanism will not be used. 00:28:12.253 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.253 [2024-05-15 12:29:40.700240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.253 [2024-05-15 12:29:40.773406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.185 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:13.185 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:28:13.185 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:13.185 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:13.185 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:13.185 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.185 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.185 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.185 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.185 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.442 nvme0n1 00:28:13.442 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:13.442 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.442 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.443 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.443 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:13.443 12:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:13.700 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:13.701 Zero copy mechanism will not be used. 00:28:13.701 Running I/O for 2 seconds... 
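The shell trace above shows host/digest.sh wiring up this error-path run before bdevperf starts issuing I/O. Replayed by hand it amounts to the sequence below; the rpc.py path, the bperf socket, and the attach arguments are copied from the expanded commands in the trace, while the socket used by the two accel_error_inject_error calls is not shown there and is only assumed here:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock

  # host/digest.sh@61: keep per-bdev NVMe error counters and retry failed commands indefinitely
  $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # host/digest.sh@63: make sure crc32c error injection starts out disabled
  $RPC accel_error_inject_error -o crc32c -t disable          # RPC socket assumed, not shown in trace

  # host/digest.sh@64: attach the TCP target with data digest enabled (--ddgst); this creates nvme0n1
  $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # host/digest.sh@67: inject crc32c corruption (-i 32) so computed data digests mismatch
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32    # RPC socket assumed, not shown in trace

  # host/digest.sh@69: tell the waiting bdevperf instance (-z) to run the configured workload
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s $BPERF_SOCK perform_tests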
00:28:13.701 [2024-05-15 12:29:42.022126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.022694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.022726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.036352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.036738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.036764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.049941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.050323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.050347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.065720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.066173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.066203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.078794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.079265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.079287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.094650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.095179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.095210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.110275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.110734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.110755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.122600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.122766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.122788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.136181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.136586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.136608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.151427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.151951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.151972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.167085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.167611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.167633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.182577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.183098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.183119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.197237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.197750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.197770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.211652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.212383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.212403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.701 [2024-05-15 12:29:42.226632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.701 [2024-05-15 12:29:42.227096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.701 [2024-05-15 12:29:42.227120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.242503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.243059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.959 [2024-05-15 12:29:42.243084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.258236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.258680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.959 [2024-05-15 12:29:42.258702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.274117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.274647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.959 [2024-05-15 12:29:42.274669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.289371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.290046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.959 [2024-05-15 12:29:42.290067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.304730] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.305271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.959 [2024-05-15 12:29:42.305292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.318915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.319413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.959 [2024-05-15 12:29:42.319434] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.333636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.334187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.959 [2024-05-15 12:29:42.334213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.349327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.349844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.959 [2024-05-15 12:29:42.349869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.365509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.365981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.959 [2024-05-15 12:29:42.366002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.380670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.381305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.959 [2024-05-15 12:29:42.381325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.395891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.396590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.959 [2024-05-15 12:29:42.396611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.412133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.412725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.959 [2024-05-15 12:29:42.412746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.959 [2024-05-15 12:29:42.427291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.959 [2024-05-15 12:29:42.427985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:13.959 [2024-05-15 12:29:42.428006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.960 [2024-05-15 12:29:42.443119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.960 [2024-05-15 12:29:42.443708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.960 [2024-05-15 12:29:42.443730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.960 [2024-05-15 12:29:42.457102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.960 [2024-05-15 12:29:42.457758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.960 [2024-05-15 12:29:42.457779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.960 [2024-05-15 12:29:42.472065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.960 [2024-05-15 12:29:42.472605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.960 [2024-05-15 12:29:42.472627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.960 [2024-05-15 12:29:42.487073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:13.960 [2024-05-15 12:29:42.487506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.960 [2024-05-15 12:29:42.487530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.502947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.503473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.503497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.517755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.518391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.518412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.533140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.533657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.533680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.548778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.549442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.549464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.564370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.565094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.565114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.580488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.581197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.581218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.595986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.596425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.596446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.611747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.612308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.612329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.626066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.626653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.626673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.642281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.642851] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.642872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.657820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.658336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.658357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.671884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.672506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.672527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.685764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.686264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.686285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.700795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.701200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.701221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.716377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.717040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.717061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.731944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.218 [2024-05-15 12:29:42.732651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.218 [2024-05-15 12:29:42.732672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.218 [2024-05-15 12:29:42.746976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.477 [2024-05-15 12:29:42.747578] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.477 [2024-05-15 12:29:42.747607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.477 [2024-05-15 12:29:42.761534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.477 [2024-05-15 12:29:42.761946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.477 [2024-05-15 12:29:42.761970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.477 [2024-05-15 12:29:42.775774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.477 [2024-05-15 12:29:42.776426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.477 [2024-05-15 12:29:42.776447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.477 [2024-05-15 12:29:42.791246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.477 [2024-05-15 12:29:42.791770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.477 [2024-05-15 12:29:42.791792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.477 [2024-05-15 12:29:42.805475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.477 [2024-05-15 12:29:42.806314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.477 [2024-05-15 12:29:42.806335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.477 [2024-05-15 12:29:42.821200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.477 [2024-05-15 12:29:42.821681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.477 [2024-05-15 12:29:42.821701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.477 [2024-05-15 12:29:42.835761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 00:28:14.477 [2024-05-15 12:29:42.836157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.477 [2024-05-15 12:29:42.836179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.477 [2024-05-15 12:29:42.851265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90 
00:28:14.477 [2024-05-15 12:29:42.851788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.477 [2024-05-15 12:29:42.851808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:14.477 .. 00:28:15.511 [2024-05-15 12:29:42.867111 .. 12:29:43.997318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22a9ee0) with pdu=0x2000190fef90, each followed by the matching nvme_qpair.c notices (WRITE sqid:1 cid:15 nsid:1 len:32 with varying lba, completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0); this triplet repeats for every I/O in the interval.
00:28:15.511
00:28:15.511 Latency(us)
00:28:15.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:15.511 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:15.511 nvme0n1 : 2.01 2015.07 251.88 0.00 0.00 7922.06 4194.30 20761.80
00:28:15.511 ===================================================================================================================
00:28:15.511 Total : 2015.07 251.88 0.00 0.00 7922.06 4194.30 20761.80
00:28:15.511 0
00:28:15.511 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:15.511 12:29:44
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:15.511 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:15.511 | .driver_specific 00:28:15.511 | .nvme_error 00:28:15.511 | .status_code 00:28:15.511 | .command_transient_transport_error' 00:28:15.511 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:15.769 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 130 > 0 )) 00:28:15.769 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2287999 00:28:15.769 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2287999 ']' 00:28:15.769 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2287999 00:28:15.769 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:28:15.769 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:15.769 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2287999 00:28:15.769 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:28:15.769 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:28:15.769 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2287999' 00:28:15.769 killing process with pid 2287999 00:28:15.769 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2287999 00:28:15.769 Received shutdown signal, test time was about 2.000000 seconds 00:28:15.769 00:28:15.769 Latency(us) 00:28:15.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.769 =================================================================================================================== 00:28:15.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:15.769 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2287999 00:28:16.026 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2285898 00:28:16.026 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2285898 ']' 00:28:16.026 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2285898 00:28:16.026 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:28:16.026 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:16.026 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2285898 00:28:16.026 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:28:16.026 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:28:16.026 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2285898' 00:28:16.026 killing process with pid 2285898 00:28:16.026 12:29:44 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2285898 00:28:16.026 [2024-05-15 12:29:44.509122] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:16.026 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2285898 00:28:16.284 00:28:16.284 real 0m16.764s 00:28:16.284 user 0m32.125s 00:28:16.284 sys 0m4.483s 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.284 ************************************ 00:28:16.284 END TEST nvmf_digest_error 00:28:16.284 ************************************ 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:16.284 rmmod nvme_tcp 00:28:16.284 rmmod nvme_fabrics 00:28:16.284 rmmod nvme_keyring 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2285898 ']' 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2285898 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@947 -- # '[' -z 2285898 ']' 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@951 -- # kill -0 2285898 00:28:16.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2285898) - No such process 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@974 -- # echo 'Process with pid 2285898 is not found' 00:28:16.284 Process with pid 2285898 is not found 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.284 12:29:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.811 12:29:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:18.811 00:28:18.811 real 0m43.201s 00:28:18.811 user 
1m6.940s 00:28:18.811 sys 0m14.365s 00:28:18.811 12:29:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:18.811 12:29:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:18.811 ************************************ 00:28:18.811 END TEST nvmf_digest 00:28:18.811 ************************************ 00:28:18.811 12:29:46 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:18.811 12:29:46 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:18.811 12:29:46 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:18.811 12:29:46 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:18.811 12:29:46 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:28:18.811 12:29:46 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:18.811 12:29:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:18.811 ************************************ 00:28:18.811 START TEST nvmf_bdevperf 00:28:18.811 ************************************ 00:28:18.811 12:29:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:18.811 * Looking for test storage... 00:28:18.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:18.811 12:29:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:25.368 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:25.368 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.368 12:29:53 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:25.368 Found net devices under 0000:af:00.0: cvl_0_0 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:25.368 Found net devices under 0000:af:00.1: cvl_0_1 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:25.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:25.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:28:25.368 00:28:25.368 --- 10.0.0.2 ping statistics --- 00:28:25.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.368 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:25.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:25.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:28:25.368 00:28:25.368 --- 10.0.0.1 ping statistics --- 00:28:25.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.368 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2292386 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2292386 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 2292386 ']' 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
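At this point waitforlisten blocks until the freshly started nvmf_tgt begins answering on its RPC socket. A minimal standalone sketch of that polling idea, assuming the SPDK checkout used by this job and the default /var/tmp/spdk.sock socket (an illustration only, not the autotest helper itself):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # SPDK checkout used by this job
RPC_SOCK=/var/tmp/spdk.sock                                  # default application RPC socket

wait_for_rpc() {
    local i
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods is a cheap call that only succeeds once the target is serving RPCs
        if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}

wait_for_rpc || echo "nvmf_tgt did not start serving RPCs in time" >&2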
00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:25.368 12:29:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:25.368 [2024-05-15 12:29:53.659534] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:28:25.368 [2024-05-15 12:29:53.659583] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.368 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.368 [2024-05-15 12:29:53.735401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:25.368 [2024-05-15 12:29:53.809357] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.368 [2024-05-15 12:29:53.809393] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.368 [2024-05-15 12:29:53.809403] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.368 [2024-05-15 12:29:53.809411] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.368 [2024-05-15 12:29:53.809420] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.368 [2024-05-15 12:29:53.809535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.368 [2024-05-15 12:29:53.809621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.368 [2024-05-15 12:29:53.809623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:26.300 [2024-05-15 12:29:54.513128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:26.300 Malloc0 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.300 12:29:54 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:26.300 [2024-05-15 12:29:54.577194] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:26.300 [2024-05-15 12:29:54.577435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.300 { 00:28:26.300 "params": { 00:28:26.300 "name": "Nvme$subsystem", 00:28:26.300 "trtype": "$TEST_TRANSPORT", 00:28:26.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.300 "adrfam": "ipv4", 00:28:26.300 "trsvcid": "$NVMF_PORT", 00:28:26.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.300 "hdgst": ${hdgst:-false}, 00:28:26.300 "ddgst": ${ddgst:-false} 00:28:26.300 }, 00:28:26.300 "method": "bdev_nvme_attach_controller" 00:28:26.300 } 00:28:26.300 EOF 00:28:26.300 )") 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:26.300 12:29:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:26.300 "params": { 00:28:26.300 "name": "Nvme1", 00:28:26.300 "trtype": "tcp", 00:28:26.300 "traddr": "10.0.0.2", 00:28:26.300 "adrfam": "ipv4", 00:28:26.300 "trsvcid": "4420", 00:28:26.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:26.300 "hdgst": false, 00:28:26.300 "ddgst": false 00:28:26.300 }, 00:28:26.300 "method": "bdev_nvme_attach_controller" 00:28:26.300 }' 00:28:26.300 [2024-05-15 12:29:54.627656] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
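From here the target is provisioned entirely over JSON-RPC: nvmf_tgt is launched inside the target namespace with core mask 0xE, a TCP transport is created, a 64 MB malloc bdev is added as the namespace of nqn.2016-06.io.spdk:cnode1, and a listener is opened on 10.0.0.2:4420. A rough standalone equivalent is sketched below; the rpc_cmd helper in the trace forwards to the same scripts/rpc.py client, $SPDK stands in for the workspace build tree, and the transport flags are copied verbatim from this run:

# Sketch of the target bring-up traced above; $SPDK is a placeholder for the
# SPDK build tree and $TGT_NS is the namespace created in the previous step.
SPDK=/path/to/spdk
TGT_NS=cvl_0_0_ns_spdk

ip netns exec "$TGT_NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# waitforlisten in the trace polls until /var/tmp/spdk.sock accepts RPCs

RPC="$SPDK/scripts/rpc.py"
"$RPC" nvmf_create_transport -t tcp -o -u 8192       # flags copied from this run
"$RPC" bdev_malloc_create 64 512 -b Malloc0           # 64 MB bdev, 512 B blocks
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420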
00:28:26.301 [2024-05-15 12:29:54.627700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2292471 ] 00:28:26.301 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.301 [2024-05-15 12:29:54.697744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.301 [2024-05-15 12:29:54.767319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.558 Running I/O for 1 seconds... 00:28:27.928 00:28:27.928 Latency(us) 00:28:27.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.928 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:27.928 Verification LBA range: start 0x0 length 0x4000 00:28:27.928 Nvme1n1 : 1.00 11434.22 44.66 0.00 0.00 11143.25 1146.88 28730.98 00:28:27.928 =================================================================================================================== 00:28:27.929 Total : 11434.22 44.66 0.00 0.00 11143.25 1146.88 28730.98 00:28:27.929 12:29:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2292746 00:28:27.929 12:29:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:27.929 12:29:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:27.929 12:29:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:27.929 12:29:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:27.929 12:29:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:27.929 12:29:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.929 12:29:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.929 { 00:28:27.929 "params": { 00:28:27.929 "name": "Nvme$subsystem", 00:28:27.929 "trtype": "$TEST_TRANSPORT", 00:28:27.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.929 "adrfam": "ipv4", 00:28:27.929 "trsvcid": "$NVMF_PORT", 00:28:27.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.929 "hdgst": ${hdgst:-false}, 00:28:27.929 "ddgst": ${ddgst:-false} 00:28:27.929 }, 00:28:27.929 "method": "bdev_nvme_attach_controller" 00:28:27.929 } 00:28:27.929 EOF 00:28:27.929 )") 00:28:27.929 12:29:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:27.929 12:29:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:27.929 12:29:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:27.929 12:29:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:27.929 "params": { 00:28:27.929 "name": "Nvme1", 00:28:27.929 "trtype": "tcp", 00:28:27.929 "traddr": "10.0.0.2", 00:28:27.929 "adrfam": "ipv4", 00:28:27.929 "trsvcid": "4420", 00:28:27.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:27.929 "hdgst": false, 00:28:27.929 "ddgst": false 00:28:27.929 }, 00:28:27.929 "method": "bdev_nvme_attach_controller" 00:28:27.929 }' 00:28:27.929 [2024-05-15 12:29:56.339226] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
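On the initiator side, bdevperf consumes the generated config through a process substitution (--json /dev/fd/62, then /dev/fd/63 for the longer run); its only entry is a bdev_nvme_attach_controller call aimed at the listener created above, and the workload is 128-deep 4 KiB verify I/O, first for 1 second and then for 15 seconds with -f. A hedged, file-based equivalent follows; the attach entry matches the JSON printed above, while the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON config layout and /tmp/bdevperf.json is just an illustrative path:

# Sketch of the bdevperf invocation traced above, using a file instead of the
# /dev/fd process substitution; $SPDK is the same placeholder as before.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

"$SPDK/build/examples/bdevperf" --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1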
00:28:27.929 [2024-05-15 12:29:56.339280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2292746 ] 00:28:27.929 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.929 [2024-05-15 12:29:56.410135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.194 [2024-05-15 12:29:56.481410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.460 Running I/O for 15 seconds... 00:28:30.987 12:29:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2292386 00:28:30.987 12:29:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:30.987 [2024-05-15 12:29:59.312661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.312700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.312725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.312742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.312762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.312776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.312794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.312809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.312825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.312840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.312857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.312869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.312887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.312904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.312921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.312935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.312954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.312971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.312989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:30.987 [2024-05-15 12:29:59.313253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 
12:29:59.313536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.987 [2024-05-15 12:29:59.313746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.987 [2024-05-15 12:29:59.313763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.313776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.313791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.313803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.313818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.313831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.313846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.313859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.313873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.313888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.313904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.313917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.313932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.313945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.313960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.313973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.313988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314099] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.988 [2024-05-15 12:29:59.314569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110496 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.988 [2024-05-15 12:29:59.314793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.988 [2024-05-15 12:29:59.314820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.988 [2024-05-15 12:29:59.314850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.988 [2024-05-15 12:29:59.314878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.988 [2024-05-15 12:29:59.314905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.988 [2024-05-15 12:29:59.314920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.314933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.314948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:30.989 [2024-05-15 12:29:59.314961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.314975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.314989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 
12:29:59.315251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.989 [2024-05-15 12:29:59.315474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.315978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.315993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.316005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.316020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.316033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.989 [2024-05-15 12:29:59.316048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.989 [2024-05-15 12:29:59.316061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.990 [2024-05-15 12:29:59.316089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.990 [2024-05-15 12:29:59.316117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.990 [2024-05-15 12:29:59.316145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.990 [2024-05-15 12:29:59.316174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.990 [2024-05-15 12:29:59.316207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.990 [2024-05-15 12:29:59.316236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.990 [2024-05-15 12:29:59.316266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.990 [2024-05-15 12:29:59.316294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.990 [2024-05-15 12:29:59.316322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e89610 is same with the state(5) to be set 00:28:30.990 [2024-05-15 12:29:59.316352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:30.990 [2024-05-15 12:29:59.316364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:30.990 [2024-05-15 12:29:59.316375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110712 len:8 PRP1 0x0 PRP2 0x0 00:28:30.990 [2024-05-15 12:29:59.316388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316444] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e89610 was disconnected and freed. reset controller. 00:28:30.990 [2024-05-15 12:29:59.316503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.990 [2024-05-15 12:29:59.316519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.990 [2024-05-15 12:29:59.316546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.990 [2024-05-15 12:29:59.316575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:30.990 [2024-05-15 12:29:59.316601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.990 [2024-05-15 12:29:59.316613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.990 [2024-05-15 12:29:59.319747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.990 [2024-05-15 12:29:59.319783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.990 [2024-05-15 12:29:59.320695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.990 [2024-05-15 12:29:59.321248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.990 [2024-05-15 12:29:59.321301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.990 [2024-05-15 12:29:59.321351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.990 [2024-05-15 12:29:59.321887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.990 [2024-05-15 12:29:59.322071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.990 [2024-05-15 12:29:59.322084] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.990 [2024-05-15 12:29:59.322100] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.990 [2024-05-15 12:29:59.324817] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:30.990 [2024-05-15 12:29:59.332894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.990 [2024-05-15 12:29:59.333491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.990 [2024-05-15 12:29:59.334020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.990 [2024-05-15 12:29:59.334056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.990 [2024-05-15 12:29:59.334071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.990 [2024-05-15 12:29:59.334260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.990 [2024-05-15 12:29:59.334434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.990 [2024-05-15 12:29:59.334446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.990 [2024-05-15 12:29:59.334459] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.990 [2024-05-15 12:29:59.337016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.990 [2024-05-15 12:29:59.345713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.990 [2024-05-15 12:29:59.346339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.990 [2024-05-15 12:29:59.346544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.990 [2024-05-15 12:29:59.346558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.990 [2024-05-15 12:29:59.346571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.990 [2024-05-15 12:29:59.346742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.990 [2024-05-15 12:29:59.346906] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.990 [2024-05-15 12:29:59.346917] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.990 [2024-05-15 12:29:59.346929] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.990 [2024-05-15 12:29:59.349466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:30.990 [2024-05-15 12:29:59.358567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.990 [2024-05-15 12:29:59.359151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.990 [2024-05-15 12:29:59.359698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.990 [2024-05-15 12:29:59.359747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.990 [2024-05-15 12:29:59.359795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.990 [2024-05-15 12:29:59.360448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.990 [2024-05-15 12:29:59.360974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.990 [2024-05-15 12:29:59.360989] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.990 [2024-05-15 12:29:59.361001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.990 [2024-05-15 12:29:59.363528] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.990 [2024-05-15 12:29:59.371388] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.990 [2024-05-15 12:29:59.372056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.990 [2024-05-15 12:29:59.372587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.990 [2024-05-15 12:29:59.372639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.990 [2024-05-15 12:29:59.372687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.990 [2024-05-15 12:29:59.373081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.990 [2024-05-15 12:29:59.373335] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.990 [2024-05-15 12:29:59.373352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.990 [2024-05-15 12:29:59.373369] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.990 [2024-05-15 12:29:59.377166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:30.990 [2024-05-15 12:29:59.385097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.990 [2024-05-15 12:29:59.385763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.990 [2024-05-15 12:29:59.386270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.990 [2024-05-15 12:29:59.386321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.990 [2024-05-15 12:29:59.386368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.990 [2024-05-15 12:29:59.386792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.990 [2024-05-15 12:29:59.386965] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.991 [2024-05-15 12:29:59.386977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.991 [2024-05-15 12:29:59.386989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.991 [2024-05-15 12:29:59.389590] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.991 [2024-05-15 12:29:59.397845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.991 [2024-05-15 12:29:59.398514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.398981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.399030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.991 [2024-05-15 12:29:59.399076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.991 [2024-05-15 12:29:59.399734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.991 [2024-05-15 12:29:59.400108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.991 [2024-05-15 12:29:59.400120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.991 [2024-05-15 12:29:59.400137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.991 [2024-05-15 12:29:59.402675] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:30.991 [2024-05-15 12:29:59.410533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.991 [2024-05-15 12:29:59.411154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.411708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.411764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.991 [2024-05-15 12:29:59.411777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.991 [2024-05-15 12:29:59.411947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.991 [2024-05-15 12:29:59.412111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.991 [2024-05-15 12:29:59.412122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.991 [2024-05-15 12:29:59.412134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.991 [2024-05-15 12:29:59.414721] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.991 [2024-05-15 12:29:59.423259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.991 [2024-05-15 12:29:59.423888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.424391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.424442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.991 [2024-05-15 12:29:59.424490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.991 [2024-05-15 12:29:59.424974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.991 [2024-05-15 12:29:59.425138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.991 [2024-05-15 12:29:59.425149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.991 [2024-05-15 12:29:59.425161] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.991 [2024-05-15 12:29:59.427694] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:30.991 [2024-05-15 12:29:59.436155] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.991 [2024-05-15 12:29:59.436815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.437344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.437395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.991 [2024-05-15 12:29:59.437443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.991 [2024-05-15 12:29:59.437959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.991 [2024-05-15 12:29:59.438133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.991 [2024-05-15 12:29:59.438145] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.991 [2024-05-15 12:29:59.438157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.991 [2024-05-15 12:29:59.440680] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.991 [2024-05-15 12:29:59.448901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.991 [2024-05-15 12:29:59.449534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.450012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.450061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.991 [2024-05-15 12:29:59.450110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.991 [2024-05-15 12:29:59.450766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.991 [2024-05-15 12:29:59.451233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.991 [2024-05-15 12:29:59.451245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.991 [2024-05-15 12:29:59.451257] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.991 [2024-05-15 12:29:59.453818] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:30.991 [2024-05-15 12:29:59.461675] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.991 [2024-05-15 12:29:59.462322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.462845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.462894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.991 [2024-05-15 12:29:59.462943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.991 [2024-05-15 12:29:59.463344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.991 [2024-05-15 12:29:59.463517] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.991 [2024-05-15 12:29:59.463529] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.991 [2024-05-15 12:29:59.463541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.991 [2024-05-15 12:29:59.466088] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.991 [2024-05-15 12:29:59.474451] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.991 [2024-05-15 12:29:59.475112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.475640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.475690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.991 [2024-05-15 12:29:59.475739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.991 [2024-05-15 12:29:59.476066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.991 [2024-05-15 12:29:59.476245] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.991 [2024-05-15 12:29:59.476257] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.991 [2024-05-15 12:29:59.476272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.991 [2024-05-15 12:29:59.478779] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:30.991 [2024-05-15 12:29:59.487230] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.991 [2024-05-15 12:29:59.487859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.488380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.991 [2024-05-15 12:29:59.488395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.991 [2024-05-15 12:29:59.488408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.991 [2024-05-15 12:29:59.488579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.991 [2024-05-15 12:29:59.488743] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.991 [2024-05-15 12:29:59.488754] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.992 [2024-05-15 12:29:59.488766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.992 [2024-05-15 12:29:59.491265] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:30.992 [2024-05-15 12:29:59.499954] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:30.992 [2024-05-15 12:29:59.500609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.992 [2024-05-15 12:29:59.501137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.992 [2024-05-15 12:29:59.501185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:30.992 [2024-05-15 12:29:59.501252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:30.992 [2024-05-15 12:29:59.501892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:30.992 [2024-05-15 12:29:59.502381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:30.992 [2024-05-15 12:29:59.502393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:30.992 [2024-05-15 12:29:59.502405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:30.992 [2024-05-15 12:29:59.504902] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:30.992 [2024-05-15 12:29:59.513005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.250 [2024-05-15 12:29:59.513670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.514105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.514153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.250 [2024-05-15 12:29:59.514223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.250 [2024-05-15 12:29:59.514423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.250 [2024-05-15 12:29:59.514601] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.250 [2024-05-15 12:29:59.514613] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.250 [2024-05-15 12:29:59.514626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.250 [2024-05-15 12:29:59.517239] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.250 [2024-05-15 12:29:59.525982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.250 [2024-05-15 12:29:59.526639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.527042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.527091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.250 [2024-05-15 12:29:59.527139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.250 [2024-05-15 12:29:59.527523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.250 [2024-05-15 12:29:59.527698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.250 [2024-05-15 12:29:59.527711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.250 [2024-05-15 12:29:59.527723] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.250 [2024-05-15 12:29:59.530268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.250 [2024-05-15 12:29:59.538787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.250 [2024-05-15 12:29:59.539440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.539891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.539939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.250 [2024-05-15 12:29:59.539987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.250 [2024-05-15 12:29:59.540521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.250 [2024-05-15 12:29:59.540695] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.250 [2024-05-15 12:29:59.540707] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.250 [2024-05-15 12:29:59.540720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.250 [2024-05-15 12:29:59.543350] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.250 [2024-05-15 12:29:59.551601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.250 [2024-05-15 12:29:59.552264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.552796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.552845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.250 [2024-05-15 12:29:59.552893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.250 [2024-05-15 12:29:59.553452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.250 [2024-05-15 12:29:59.553627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.250 [2024-05-15 12:29:59.553639] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.250 [2024-05-15 12:29:59.553651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.250 [2024-05-15 12:29:59.556199] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.250 [2024-05-15 12:29:59.564457] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.250 [2024-05-15 12:29:59.565123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.565572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.565591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.250 [2024-05-15 12:29:59.565605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.250 [2024-05-15 12:29:59.565793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.250 [2024-05-15 12:29:59.565971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.250 [2024-05-15 12:29:59.565983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.250 [2024-05-15 12:29:59.565996] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.250 [2024-05-15 12:29:59.568714] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.250 [2024-05-15 12:29:59.577488] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.250 [2024-05-15 12:29:59.578126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.578443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.578459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.250 [2024-05-15 12:29:59.578473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.250 [2024-05-15 12:29:59.578659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.250 [2024-05-15 12:29:59.578838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.250 [2024-05-15 12:29:59.578850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.250 [2024-05-15 12:29:59.578863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.250 [2024-05-15 12:29:59.581585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.250 [2024-05-15 12:29:59.590383] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.250 [2024-05-15 12:29:59.591036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.591486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.591535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.250 [2024-05-15 12:29:59.591593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.250 [2024-05-15 12:29:59.591773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.250 [2024-05-15 12:29:59.591946] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.250 [2024-05-15 12:29:59.591958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.250 [2024-05-15 12:29:59.591970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.250 [2024-05-15 12:29:59.594615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.250 [2024-05-15 12:29:59.603347] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.250 [2024-05-15 12:29:59.604007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.604544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.604596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.250 [2024-05-15 12:29:59.604652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.250 [2024-05-15 12:29:59.604910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.250 [2024-05-15 12:29:59.605089] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.250 [2024-05-15 12:29:59.605101] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.250 [2024-05-15 12:29:59.605114] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.250 [2024-05-15 12:29:59.607843] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.250 [2024-05-15 12:29:59.616152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.250 [2024-05-15 12:29:59.616798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.617328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.617381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.250 [2024-05-15 12:29:59.617428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.250 [2024-05-15 12:29:59.617886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.250 [2024-05-15 12:29:59.618051] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.250 [2024-05-15 12:29:59.618062] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.250 [2024-05-15 12:29:59.618074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.250 [2024-05-15 12:29:59.620666] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.250 [2024-05-15 12:29:59.628974] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.250 [2024-05-15 12:29:59.629618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.630078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.630126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.250 [2024-05-15 12:29:59.630174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.250 [2024-05-15 12:29:59.630798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.250 [2024-05-15 12:29:59.630973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.250 [2024-05-15 12:29:59.630985] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.250 [2024-05-15 12:29:59.630998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.250 [2024-05-15 12:29:59.633527] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.250 [2024-05-15 12:29:59.641824] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.250 [2024-05-15 12:29:59.643304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.643733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.250 [2024-05-15 12:29:59.643787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.250 [2024-05-15 12:29:59.643838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.251 [2024-05-15 12:29:59.644516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.251 [2024-05-15 12:29:59.644883] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.251 [2024-05-15 12:29:59.644895] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.251 [2024-05-15 12:29:59.644907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.251 [2024-05-15 12:29:59.647449] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.251 [2024-05-15 12:29:59.654703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.251 [2024-05-15 12:29:59.655349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.655811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.655862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.251 [2024-05-15 12:29:59.655911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.251 [2024-05-15 12:29:59.656448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.251 [2024-05-15 12:29:59.656624] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.251 [2024-05-15 12:29:59.656636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.251 [2024-05-15 12:29:59.656649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.251 [2024-05-15 12:29:59.660251] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.251 [2024-05-15 12:29:59.668253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.251 [2024-05-15 12:29:59.668881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.669406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.669457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.251 [2024-05-15 12:29:59.669505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.251 [2024-05-15 12:29:59.670144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.251 [2024-05-15 12:29:59.670673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.251 [2024-05-15 12:29:59.670686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.251 [2024-05-15 12:29:59.670697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.251 [2024-05-15 12:29:59.673200] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.251 [2024-05-15 12:29:59.681006] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.251 [2024-05-15 12:29:59.681576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.682053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.682102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.251 [2024-05-15 12:29:59.682151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.251 [2024-05-15 12:29:59.682727] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.251 [2024-05-15 12:29:59.682896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.251 [2024-05-15 12:29:59.682908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.251 [2024-05-15 12:29:59.682920] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.251 [2024-05-15 12:29:59.685453] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.251 [2024-05-15 12:29:59.693775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.251 [2024-05-15 12:29:59.694353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.694785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.694835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.251 [2024-05-15 12:29:59.694883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.251 [2024-05-15 12:29:59.695541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.251 [2024-05-15 12:29:59.695872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.251 [2024-05-15 12:29:59.695884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.251 [2024-05-15 12:29:59.695895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.251 [2024-05-15 12:29:59.698418] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.251 [2024-05-15 12:29:59.706601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.251 [2024-05-15 12:29:59.707246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.707651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.707666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.251 [2024-05-15 12:29:59.707680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.251 [2024-05-15 12:29:59.707862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.251 [2024-05-15 12:29:59.708035] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.251 [2024-05-15 12:29:59.708047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.251 [2024-05-15 12:29:59.708060] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.251 [2024-05-15 12:29:59.710622] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.251 [2024-05-15 12:29:59.719359] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.251 [2024-05-15 12:29:59.719924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.720374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.720427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.251 [2024-05-15 12:29:59.720474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.251 [2024-05-15 12:29:59.720710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.251 [2024-05-15 12:29:59.720886] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.251 [2024-05-15 12:29:59.720902] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.251 [2024-05-15 12:29:59.720914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.251 [2024-05-15 12:29:59.723435] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.251 [2024-05-15 12:29:59.732200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.251 [2024-05-15 12:29:59.732827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.733184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.733246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.251 [2024-05-15 12:29:59.733295] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.251 [2024-05-15 12:29:59.733773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.251 [2024-05-15 12:29:59.733937] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.251 [2024-05-15 12:29:59.733949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.251 [2024-05-15 12:29:59.733961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.251 [2024-05-15 12:29:59.736613] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.251 [2024-05-15 12:29:59.745055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.251 [2024-05-15 12:29:59.745676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.746147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.746206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.251 [2024-05-15 12:29:59.746257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.251 [2024-05-15 12:29:59.746838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.251 [2024-05-15 12:29:59.747004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.251 [2024-05-15 12:29:59.747016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.251 [2024-05-15 12:29:59.747028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.251 [2024-05-15 12:29:59.750725] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.251 [2024-05-15 12:29:59.758627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.251 [2024-05-15 12:29:59.759285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.759811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.759860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.251 [2024-05-15 12:29:59.759908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.251 [2024-05-15 12:29:59.760563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.251 [2024-05-15 12:29:59.760907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.251 [2024-05-15 12:29:59.760919] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.251 [2024-05-15 12:29:59.760936] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.251 [2024-05-15 12:29:59.763527] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.251 [2024-05-15 12:29:59.771538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.251 [2024-05-15 12:29:59.772178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.772642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.251 [2024-05-15 12:29:59.772683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.251 [2024-05-15 12:29:59.772697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.251 [2024-05-15 12:29:59.772868] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.251 [2024-05-15 12:29:59.773034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.251 [2024-05-15 12:29:59.773045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.251 [2024-05-15 12:29:59.773057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.251 [2024-05-15 12:29:59.775694] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.511 [2024-05-15 12:29:59.784445] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.511 [2024-05-15 12:29:59.784956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.785343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.785394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.511 [2024-05-15 12:29:59.785443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.511 [2024-05-15 12:29:59.785914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.511 [2024-05-15 12:29:59.786079] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.511 [2024-05-15 12:29:59.786090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.511 [2024-05-15 12:29:59.786102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.511 [2024-05-15 12:29:59.788728] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.511 [2024-05-15 12:29:59.797213] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.511 [2024-05-15 12:29:59.797814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.798320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.798373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.511 [2024-05-15 12:29:59.798419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.511 [2024-05-15 12:29:59.799052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.511 [2024-05-15 12:29:59.799221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.511 [2024-05-15 12:29:59.799233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.511 [2024-05-15 12:29:59.799245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.511 [2024-05-15 12:29:59.801840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.511 [2024-05-15 12:29:59.810090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.511 [2024-05-15 12:29:59.810611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.811058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.811107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.511 [2024-05-15 12:29:59.811153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.511 [2024-05-15 12:29:59.811807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.511 [2024-05-15 12:29:59.812122] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.511 [2024-05-15 12:29:59.812134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.511 [2024-05-15 12:29:59.812146] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.511 [2024-05-15 12:29:59.814738] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.511 [2024-05-15 12:29:59.823038] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.511 [2024-05-15 12:29:59.823718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.824284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.824335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.511 [2024-05-15 12:29:59.824384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.511 [2024-05-15 12:29:59.824866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.511 [2024-05-15 12:29:59.825040] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.511 [2024-05-15 12:29:59.825052] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.511 [2024-05-15 12:29:59.825064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.511 [2024-05-15 12:29:59.827708] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.511 [2024-05-15 12:29:59.835955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.511 [2024-05-15 12:29:59.836613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.837071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.837120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.511 [2024-05-15 12:29:59.837168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.511 [2024-05-15 12:29:59.837825] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.511 [2024-05-15 12:29:59.838300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.511 [2024-05-15 12:29:59.838311] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.511 [2024-05-15 12:29:59.838323] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.511 [2024-05-15 12:29:59.841912] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.511 [2024-05-15 12:29:59.849740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.511 [2024-05-15 12:29:59.850327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.850841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.850890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.511 [2024-05-15 12:29:59.850937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.511 [2024-05-15 12:29:59.851159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.511 [2024-05-15 12:29:59.851330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.511 [2024-05-15 12:29:59.851343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.511 [2024-05-15 12:29:59.851354] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.511 [2024-05-15 12:29:59.853934] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.511 [2024-05-15 12:29:59.862514] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.511 [2024-05-15 12:29:59.863140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.863531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.863582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.511 [2024-05-15 12:29:59.863628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.511 [2024-05-15 12:29:59.864089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.511 [2024-05-15 12:29:59.864258] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.511 [2024-05-15 12:29:59.864270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.511 [2024-05-15 12:29:59.864282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.511 [2024-05-15 12:29:59.866825] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.511 [2024-05-15 12:29:59.875303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.511 [2024-05-15 12:29:59.875960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.876435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.876486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.511 [2024-05-15 12:29:59.876534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.511 [2024-05-15 12:29:59.877045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.511 [2024-05-15 12:29:59.877215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.511 [2024-05-15 12:29:59.877227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.511 [2024-05-15 12:29:59.877239] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.511 [2024-05-15 12:29:59.879824] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.511 [2024-05-15 12:29:59.888121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.511 [2024-05-15 12:29:59.888780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.889292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.511 [2024-05-15 12:29:59.889342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.511 [2024-05-15 12:29:59.889389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.511 [2024-05-15 12:29:59.890028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.511 [2024-05-15 12:29:59.890399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.511 [2024-05-15 12:29:59.890416] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.511 [2024-05-15 12:29:59.890433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.512 [2024-05-15 12:29:59.894234] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.512 [2024-05-15 12:29:59.901256] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.512 [2024-05-15 12:29:59.901902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.902337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.902387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.512 [2024-05-15 12:29:59.902432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.512 [2024-05-15 12:29:59.903079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.512 [2024-05-15 12:29:59.903259] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.512 [2024-05-15 12:29:59.903271] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.512 [2024-05-15 12:29:59.903284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.512 [2024-05-15 12:29:59.905923] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.512 [2024-05-15 12:29:59.913972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.512 [2024-05-15 12:29:59.914575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.915136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.915185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.512 [2024-05-15 12:29:59.915243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.512 [2024-05-15 12:29:59.915884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.512 [2024-05-15 12:29:59.916357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.512 [2024-05-15 12:29:59.916368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.512 [2024-05-15 12:29:59.916381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.512 [2024-05-15 12:29:59.918935] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.512 [2024-05-15 12:29:59.926680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.512 [2024-05-15 12:29:59.927327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.927808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.927858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.512 [2024-05-15 12:29:59.927906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.512 [2024-05-15 12:29:59.928196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.512 [2024-05-15 12:29:59.928379] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.512 [2024-05-15 12:29:59.928390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.512 [2024-05-15 12:29:59.928403] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.512 [2024-05-15 12:29:59.930899] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.512 [2024-05-15 12:29:59.939523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.512 [2024-05-15 12:29:59.940167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.940690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.940739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.512 [2024-05-15 12:29:59.940787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.512 [2024-05-15 12:29:59.941414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.512 [2024-05-15 12:29:59.941580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.512 [2024-05-15 12:29:59.941591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.512 [2024-05-15 12:29:59.941603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.512 [2024-05-15 12:29:59.944098] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.512 [2024-05-15 12:29:59.952234] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.512 [2024-05-15 12:29:59.952867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.953317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.953367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.512 [2024-05-15 12:29:59.953413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.512 [2024-05-15 12:29:59.954051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.512 [2024-05-15 12:29:59.954272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.512 [2024-05-15 12:29:59.954283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.512 [2024-05-15 12:29:59.954296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.512 [2024-05-15 12:29:59.956840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.512 [2024-05-15 12:29:59.965112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.512 [2024-05-15 12:29:59.965767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.966228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.966278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.512 [2024-05-15 12:29:59.966336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.512 [2024-05-15 12:29:59.966737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.512 [2024-05-15 12:29:59.966910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.512 [2024-05-15 12:29:59.966922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.512 [2024-05-15 12:29:59.966934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.512 [2024-05-15 12:29:59.969468] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.512 [2024-05-15 12:29:59.977904] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.512 [2024-05-15 12:29:59.978499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.978886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.978936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.512 [2024-05-15 12:29:59.978982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.512 [2024-05-15 12:29:59.979295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.512 [2024-05-15 12:29:59.979468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.512 [2024-05-15 12:29:59.979480] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.512 [2024-05-15 12:29:59.979493] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.512 [2024-05-15 12:29:59.982052] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.512 [2024-05-15 12:29:59.990872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.512 [2024-05-15 12:29:59.991445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.991969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:29:59.992018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.512 [2024-05-15 12:29:59.992066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.512 [2024-05-15 12:29:59.992592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.512 [2024-05-15 12:29:59.992768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.512 [2024-05-15 12:29:59.992780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.512 [2024-05-15 12:29:59.992793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.512 [2024-05-15 12:29:59.995332] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.512 [2024-05-15 12:30:00.004001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.512 [2024-05-15 12:30:00.004635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:30:00.005167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.512 [2024-05-15 12:30:00.005206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.512 [2024-05-15 12:30:00.005282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.512 [2024-05-15 12:30:00.005592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.512 [2024-05-15 12:30:00.005790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.512 [2024-05-15 12:30:00.005805] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.512 [2024-05-15 12:30:00.005820] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.512 [2024-05-15 12:30:00.008813] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.512 [2024-05-15 12:30:00.017098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.513 [2024-05-15 12:30:00.017763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.513 [2024-05-15 12:30:00.018101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.513 [2024-05-15 12:30:00.018116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.513 [2024-05-15 12:30:00.018131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.513 [2024-05-15 12:30:00.018323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.513 [2024-05-15 12:30:00.018502] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.513 [2024-05-15 12:30:00.018514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.513 [2024-05-15 12:30:00.018527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.513 [2024-05-15 12:30:00.021244] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.513 [2024-05-15 12:30:00.030008] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.513 [2024-05-15 12:30:00.030607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.513 [2024-05-15 12:30:00.030992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.513 [2024-05-15 12:30:00.031009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.513 [2024-05-15 12:30:00.031024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.513 [2024-05-15 12:30:00.031227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.513 [2024-05-15 12:30:00.031461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.513 [2024-05-15 12:30:00.031474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.513 [2024-05-15 12:30:00.031489] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.513 [2024-05-15 12:30:00.034471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.772 [2024-05-15 12:30:00.042900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.772 [2024-05-15 12:30:00.043424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.772 [2024-05-15 12:30:00.043854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.772 [2024-05-15 12:30:00.043871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.772 [2024-05-15 12:30:00.043885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.772 [2024-05-15 12:30:00.044070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.772 [2024-05-15 12:30:00.044258] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.772 [2024-05-15 12:30:00.044271] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.772 [2024-05-15 12:30:00.044284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.772 [2024-05-15 12:30:00.047007] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.772 [2024-05-15 12:30:00.055835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.772 [2024-05-15 12:30:00.056565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.772 [2024-05-15 12:30:00.056990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.772 [2024-05-15 12:30:00.057061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.772 [2024-05-15 12:30:00.057095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.772 [2024-05-15 12:30:00.057499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.772 [2024-05-15 12:30:00.057786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.772 [2024-05-15 12:30:00.057811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.772 [2024-05-15 12:30:00.057882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.772 [2024-05-15 12:30:00.060859] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.772 [2024-05-15 12:30:00.068826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.772 [2024-05-15 12:30:00.069478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.772 [2024-05-15 12:30:00.069864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.772 [2024-05-15 12:30:00.069880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.772 [2024-05-15 12:30:00.069894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.772 [2024-05-15 12:30:00.070080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.772 [2024-05-15 12:30:00.070263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.772 [2024-05-15 12:30:00.070276] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.772 [2024-05-15 12:30:00.070289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.772 [2024-05-15 12:30:00.073007] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.772 [2024-05-15 12:30:00.081780] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.772 [2024-05-15 12:30:00.082418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.082875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.082891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.773 [2024-05-15 12:30:00.082905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.773 [2024-05-15 12:30:00.083090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.773 [2024-05-15 12:30:00.083274] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.773 [2024-05-15 12:30:00.083291] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.773 [2024-05-15 12:30:00.083304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.773 [2024-05-15 12:30:00.086016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.773 [2024-05-15 12:30:00.094790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.773 [2024-05-15 12:30:00.095310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.095684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.095699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.773 [2024-05-15 12:30:00.095713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.773 [2024-05-15 12:30:00.095925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.773 [2024-05-15 12:30:00.096104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.773 [2024-05-15 12:30:00.096117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.773 [2024-05-15 12:30:00.096130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.773 [2024-05-15 12:30:00.098853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.773 [2024-05-15 12:30:00.107802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.773 [2024-05-15 12:30:00.108371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.108804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.108818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.773 [2024-05-15 12:30:00.108832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.773 [2024-05-15 12:30:00.109003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.773 [2024-05-15 12:30:00.109168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.773 [2024-05-15 12:30:00.109179] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.773 [2024-05-15 12:30:00.109197] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.773 [2024-05-15 12:30:00.111884] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.773 [2024-05-15 12:30:00.120783] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.773 [2024-05-15 12:30:00.121463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.121811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.121859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.773 [2024-05-15 12:30:00.121908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.773 [2024-05-15 12:30:00.122561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.773 [2024-05-15 12:30:00.123013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.773 [2024-05-15 12:30:00.123025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.773 [2024-05-15 12:30:00.123043] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.773 [2024-05-15 12:30:00.125730] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.773 [2024-05-15 12:30:00.133713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.773 [2024-05-15 12:30:00.134278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.134661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.134677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.773 [2024-05-15 12:30:00.134692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.773 [2024-05-15 12:30:00.134889] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.773 [2024-05-15 12:30:00.135067] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.773 [2024-05-15 12:30:00.135079] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.773 [2024-05-15 12:30:00.135092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.773 [2024-05-15 12:30:00.137818] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.773 [2024-05-15 12:30:00.146714] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.773 [2024-05-15 12:30:00.147383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.147893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.147942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.773 [2024-05-15 12:30:00.147992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.773 [2024-05-15 12:30:00.148674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.773 [2024-05-15 12:30:00.149004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.773 [2024-05-15 12:30:00.149016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.773 [2024-05-15 12:30:00.149029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.773 [2024-05-15 12:30:00.151769] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.773 [2024-05-15 12:30:00.159650] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.773 [2024-05-15 12:30:00.160341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.160749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.160775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.773 [2024-05-15 12:30:00.160788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.773 [2024-05-15 12:30:00.160990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.773 [2024-05-15 12:30:00.161168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.773 [2024-05-15 12:30:00.161180] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.773 [2024-05-15 12:30:00.161198] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.773 [2024-05-15 12:30:00.163937] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.773 [2024-05-15 12:30:00.172662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.773 [2024-05-15 12:30:00.173296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.173679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.173694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.773 [2024-05-15 12:30:00.173708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.773 [2024-05-15 12:30:00.173893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.773 [2024-05-15 12:30:00.174071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.773 [2024-05-15 12:30:00.174083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.773 [2024-05-15 12:30:00.174096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.773 [2024-05-15 12:30:00.176867] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.773 [2024-05-15 12:30:00.185708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.773 [2024-05-15 12:30:00.186343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.186746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.186761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.773 [2024-05-15 12:30:00.186776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.773 [2024-05-15 12:30:00.186961] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.773 [2024-05-15 12:30:00.187139] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.773 [2024-05-15 12:30:00.187151] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.773 [2024-05-15 12:30:00.187164] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.773 [2024-05-15 12:30:00.189879] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.773 [2024-05-15 12:30:00.198674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.773 [2024-05-15 12:30:00.199312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.199716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.773 [2024-05-15 12:30:00.199731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.773 [2024-05-15 12:30:00.199745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.773 [2024-05-15 12:30:00.199930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.774 [2024-05-15 12:30:00.200108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.774 [2024-05-15 12:30:00.200120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.774 [2024-05-15 12:30:00.200133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.774 [2024-05-15 12:30:00.202854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.774 [2024-05-15 12:30:00.211648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.774 [2024-05-15 12:30:00.212314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.212821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.212870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.774 [2024-05-15 12:30:00.212917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.774 [2024-05-15 12:30:00.213579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.774 [2024-05-15 12:30:00.214166] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.774 [2024-05-15 12:30:00.214183] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.774 [2024-05-15 12:30:00.214208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.774 [2024-05-15 12:30:00.218014] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.774 [2024-05-15 12:30:00.225061] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.774 [2024-05-15 12:30:00.225700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.226079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.226094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.774 [2024-05-15 12:30:00.226108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.774 [2024-05-15 12:30:00.226301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.774 [2024-05-15 12:30:00.226479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.774 [2024-05-15 12:30:00.226492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.774 [2024-05-15 12:30:00.226505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.774 [2024-05-15 12:30:00.229242] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.774 [2024-05-15 12:30:00.238027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.774 [2024-05-15 12:30:00.238565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.238959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.239008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.774 [2024-05-15 12:30:00.239050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.774 [2024-05-15 12:30:00.239242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.774 [2024-05-15 12:30:00.239423] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.774 [2024-05-15 12:30:00.239435] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.774 [2024-05-15 12:30:00.239449] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.774 [2024-05-15 12:30:00.242165] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.774 [2024-05-15 12:30:00.250975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.774 [2024-05-15 12:30:00.251592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.252068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.252117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.774 [2024-05-15 12:30:00.252166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.774 [2024-05-15 12:30:00.252820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.774 [2024-05-15 12:30:00.253241] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.774 [2024-05-15 12:30:00.253254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.774 [2024-05-15 12:30:00.253268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.774 [2024-05-15 12:30:00.257055] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.774 [2024-05-15 12:30:00.264688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.774 [2024-05-15 12:30:00.265329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.265862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.265910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.774 [2024-05-15 12:30:00.265957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.774 [2024-05-15 12:30:00.266211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.774 [2024-05-15 12:30:00.266391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.774 [2024-05-15 12:30:00.266403] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.774 [2024-05-15 12:30:00.266416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.774 [2024-05-15 12:30:00.269088] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.774 [2024-05-15 12:30:00.277620] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.774 [2024-05-15 12:30:00.278262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.278762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.278810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.774 [2024-05-15 12:30:00.278857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.774 [2024-05-15 12:30:00.279305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.774 [2024-05-15 12:30:00.279481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.774 [2024-05-15 12:30:00.279493] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.774 [2024-05-15 12:30:00.279507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.774 [2024-05-15 12:30:00.282181] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:31.774 [2024-05-15 12:30:00.290576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.774 [2024-05-15 12:30:00.291194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.291579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.774 [2024-05-15 12:30:00.291629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:31.774 [2024-05-15 12:30:00.291678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:31.774 [2024-05-15 12:30:00.292127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:31.774 [2024-05-15 12:30:00.292304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.774 [2024-05-15 12:30:00.292316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.774 [2024-05-15 12:30:00.292329] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.774 [2024-05-15 12:30:00.295032] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.033 [2024-05-15 12:30:00.303533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.033 [2024-05-15 12:30:00.304168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.033 [2024-05-15 12:30:00.304514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.033 [2024-05-15 12:30:00.304565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.033 [2024-05-15 12:30:00.304613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.033 [2024-05-15 12:30:00.305269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.033 [2024-05-15 12:30:00.305573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.033 [2024-05-15 12:30:00.305585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.033 [2024-05-15 12:30:00.305598] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.033 [2024-05-15 12:30:00.308326] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.033 [2024-05-15 12:30:00.316489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.033 [2024-05-15 12:30:00.317177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.033 [2024-05-15 12:30:00.317524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.033 [2024-05-15 12:30:00.317577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.033 [2024-05-15 12:30:00.317626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.033 [2024-05-15 12:30:00.318108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.033 [2024-05-15 12:30:00.318291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.033 [2024-05-15 12:30:00.318304] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.033 [2024-05-15 12:30:00.318317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.033 [2024-05-15 12:30:00.320994] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.033 [2024-05-15 12:30:00.329462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.033 [2024-05-15 12:30:00.330117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.033 [2024-05-15 12:30:00.330575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.033 [2024-05-15 12:30:00.330627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.033 [2024-05-15 12:30:00.330686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.033 [2024-05-15 12:30:00.331147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.033 [2024-05-15 12:30:00.331332] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.033 [2024-05-15 12:30:00.331346] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.033 [2024-05-15 12:30:00.331359] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.033 [2024-05-15 12:30:00.334095] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.033 [2024-05-15 12:30:00.342442] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.034 [2024-05-15 12:30:00.343077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.343533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.343584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.034 [2024-05-15 12:30:00.343633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.034 [2024-05-15 12:30:00.344283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.034 [2024-05-15 12:30:00.344652] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.034 [2024-05-15 12:30:00.344664] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.034 [2024-05-15 12:30:00.344678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.034 [2024-05-15 12:30:00.348308] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.034 [2024-05-15 12:30:00.356046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.034 [2024-05-15 12:30:00.356712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.357169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.357234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.034 [2024-05-15 12:30:00.357285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.034 [2024-05-15 12:30:00.357650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.034 [2024-05-15 12:30:00.357823] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.034 [2024-05-15 12:30:00.357835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.034 [2024-05-15 12:30:00.357847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.034 [2024-05-15 12:30:00.360513] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.034 [2024-05-15 12:30:00.369003] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.034 [2024-05-15 12:30:00.369660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.370141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.370208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.034 [2024-05-15 12:30:00.370222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.034 [2024-05-15 12:30:00.370422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.034 [2024-05-15 12:30:00.370601] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.034 [2024-05-15 12:30:00.370613] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.034 [2024-05-15 12:30:00.370626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.034 [2024-05-15 12:30:00.373377] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.034 [2024-05-15 12:30:00.381947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.034 [2024-05-15 12:30:00.382613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.383126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.383175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.034 [2024-05-15 12:30:00.383242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.034 [2024-05-15 12:30:00.383884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.034 [2024-05-15 12:30:00.384547] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.034 [2024-05-15 12:30:00.384562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.034 [2024-05-15 12:30:00.384576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.034 [2024-05-15 12:30:00.387249] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.034 [2024-05-15 12:30:00.394897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.034 [2024-05-15 12:30:00.395471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.395927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.395977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.034 [2024-05-15 12:30:00.396026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.034 [2024-05-15 12:30:00.396443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.034 [2024-05-15 12:30:00.396617] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.034 [2024-05-15 12:30:00.396629] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.034 [2024-05-15 12:30:00.396642] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.034 [2024-05-15 12:30:00.399265] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.034 [2024-05-15 12:30:00.407946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.034 [2024-05-15 12:30:00.408588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.408979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.409029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.034 [2024-05-15 12:30:00.409078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.034 [2024-05-15 12:30:00.409597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.034 [2024-05-15 12:30:00.409780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.034 [2024-05-15 12:30:00.409793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.034 [2024-05-15 12:30:00.409806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.034 [2024-05-15 12:30:00.412555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.034 [2024-05-15 12:30:00.420845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.034 [2024-05-15 12:30:00.421539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.421989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.422038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.034 [2024-05-15 12:30:00.422086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.034 [2024-05-15 12:30:00.422359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.034 [2024-05-15 12:30:00.422534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.034 [2024-05-15 12:30:00.422546] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.034 [2024-05-15 12:30:00.422559] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.034 [2024-05-15 12:30:00.425258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.034 [2024-05-15 12:30:00.433872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.034 [2024-05-15 12:30:00.434472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.434995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.435044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.034 [2024-05-15 12:30:00.435092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.034 [2024-05-15 12:30:00.435695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.034 [2024-05-15 12:30:00.435945] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.034 [2024-05-15 12:30:00.435962] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.034 [2024-05-15 12:30:00.435980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.034 [2024-05-15 12:30:00.439771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.034 [2024-05-15 12:30:00.447365] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.034 [2024-05-15 12:30:00.448017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.448533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.448581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.034 [2024-05-15 12:30:00.448595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.034 [2024-05-15 12:30:00.448789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.034 [2024-05-15 12:30:00.448965] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.034 [2024-05-15 12:30:00.448977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.034 [2024-05-15 12:30:00.448989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.034 [2024-05-15 12:30:00.451636] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.034 [2024-05-15 12:30:00.460240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.034 [2024-05-15 12:30:00.460884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.461434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.034 [2024-05-15 12:30:00.461485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.034 [2024-05-15 12:30:00.461532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.034 [2024-05-15 12:30:00.462174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.034 [2024-05-15 12:30:00.462700] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.035 [2024-05-15 12:30:00.462712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.035 [2024-05-15 12:30:00.462726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.035 [2024-05-15 12:30:00.465457] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.035 [2024-05-15 12:30:00.473169] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.035 [2024-05-15 12:30:00.473818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.474342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.474393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.035 [2024-05-15 12:30:00.474440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.035 [2024-05-15 12:30:00.474830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.035 [2024-05-15 12:30:00.475014] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.035 [2024-05-15 12:30:00.475026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.035 [2024-05-15 12:30:00.475039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.035 [2024-05-15 12:30:00.477730] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.035 [2024-05-15 12:30:00.486132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.035 [2024-05-15 12:30:00.486804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.487266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.487316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.035 [2024-05-15 12:30:00.487362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.035 [2024-05-15 12:30:00.487786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.035 [2024-05-15 12:30:00.487965] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.035 [2024-05-15 12:30:00.487977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.035 [2024-05-15 12:30:00.487994] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.035 [2024-05-15 12:30:00.490658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.035 [2024-05-15 12:30:00.499042] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.035 [2024-05-15 12:30:00.499729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.500181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.500247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.035 [2024-05-15 12:30:00.500297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.035 [2024-05-15 12:30:00.500749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.035 [2024-05-15 12:30:00.500924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.035 [2024-05-15 12:30:00.500936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.035 [2024-05-15 12:30:00.500948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.035 [2024-05-15 12:30:00.503700] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.035 [2024-05-15 12:30:00.512081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.035 [2024-05-15 12:30:00.512737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.513270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.513321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.035 [2024-05-15 12:30:00.513368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.035 [2024-05-15 12:30:00.513920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.035 [2024-05-15 12:30:00.514094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.035 [2024-05-15 12:30:00.514106] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.035 [2024-05-15 12:30:00.514119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.035 [2024-05-15 12:30:00.516811] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.035 [2024-05-15 12:30:00.525126] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.035 [2024-05-15 12:30:00.525808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.526365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.526417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.035 [2024-05-15 12:30:00.526463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.035 [2024-05-15 12:30:00.526928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.035 [2024-05-15 12:30:00.527176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.035 [2024-05-15 12:30:00.527199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.035 [2024-05-15 12:30:00.527222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.035 [2024-05-15 12:30:00.531037] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.035 [2024-05-15 12:30:00.538618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.035 [2024-05-15 12:30:00.539270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.539660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.539709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.035 [2024-05-15 12:30:00.539757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.035 [2024-05-15 12:30:00.540411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.035 [2024-05-15 12:30:00.540877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.035 [2024-05-15 12:30:00.540889] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.035 [2024-05-15 12:30:00.540902] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.035 [2024-05-15 12:30:00.543595] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.035 [2024-05-15 12:30:00.551443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.035 [2024-05-15 12:30:00.552101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.552608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.035 [2024-05-15 12:30:00.552659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.035 [2024-05-15 12:30:00.552708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.035 [2024-05-15 12:30:00.553098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.035 [2024-05-15 12:30:00.553270] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.035 [2024-05-15 12:30:00.553282] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.035 [2024-05-15 12:30:00.553309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.035 [2024-05-15 12:30:00.555923] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.294 [2024-05-15 12:30:00.564477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.294 [2024-05-15 12:30:00.565123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-05-15 12:30:00.565658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-05-15 12:30:00.565708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.294 [2024-05-15 12:30:00.565757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.294 [2024-05-15 12:30:00.566071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.294 [2024-05-15 12:30:00.566270] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.294 [2024-05-15 12:30:00.566283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.294 [2024-05-15 12:30:00.566296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.294 [2024-05-15 12:30:00.569005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.294 [2024-05-15 12:30:00.577257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.294 [2024-05-15 12:30:00.577785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-05-15 12:30:00.578163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-05-15 12:30:00.578178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.294 [2024-05-15 12:30:00.578198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.294 [2024-05-15 12:30:00.578384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.294 [2024-05-15 12:30:00.578563] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.294 [2024-05-15 12:30:00.578575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.294 [2024-05-15 12:30:00.578589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.294 [2024-05-15 12:30:00.581313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.294 [2024-05-15 12:30:00.590209] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.294 [2024-05-15 12:30:00.590819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-05-15 12:30:00.591279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-05-15 12:30:00.591329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.294 [2024-05-15 12:30:00.591377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.294 [2024-05-15 12:30:00.591807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.294 [2024-05-15 12:30:00.591980] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.294 [2024-05-15 12:30:00.591992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.294 [2024-05-15 12:30:00.592004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.294 [2024-05-15 12:30:00.594646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.295 [2024-05-15 12:30:00.603097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.295 [2024-05-15 12:30:00.603625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.604172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.604234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.295 [2024-05-15 12:30:00.604283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.295 [2024-05-15 12:30:00.604770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.295 [2024-05-15 12:30:00.604943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.295 [2024-05-15 12:30:00.604955] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.295 [2024-05-15 12:30:00.604967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.295 [2024-05-15 12:30:00.607504] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.295 [2024-05-15 12:30:00.615894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.295 [2024-05-15 12:30:00.616535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.616953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.617001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.295 [2024-05-15 12:30:00.617047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.295 [2024-05-15 12:30:00.617703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.295 [2024-05-15 12:30:00.618155] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.295 [2024-05-15 12:30:00.618172] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.295 [2024-05-15 12:30:00.618198] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.295 [2024-05-15 12:30:00.622003] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.295 [2024-05-15 12:30:00.629459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.295 [2024-05-15 12:30:00.630109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.630629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.630679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.295 [2024-05-15 12:30:00.630726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.295 [2024-05-15 12:30:00.630986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.295 [2024-05-15 12:30:00.631151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.295 [2024-05-15 12:30:00.631162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.295 [2024-05-15 12:30:00.631174] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.295 [2024-05-15 12:30:00.633764] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.295 [2024-05-15 12:30:00.642342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.295 [2024-05-15 12:30:00.642997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.643466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.643515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.295 [2024-05-15 12:30:00.643563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.295 [2024-05-15 12:30:00.643967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.295 [2024-05-15 12:30:00.644132] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.295 [2024-05-15 12:30:00.644143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.295 [2024-05-15 12:30:00.644155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.295 [2024-05-15 12:30:00.646780] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
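[editor's note] The "Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor" entries that follow each refused connect are consistent with the socket already having been torn down: once the descriptor is closed, any further flush/write on it fails with errno 9 (EBADF), which is the "(9)" printed in the message. The snippet below only demonstrates that generic POSIX behaviour; it is not nvme_tcp.c's flush path.

/* Hypothetical sketch of the "(9): Bad file descriptor" symptom: writing to a
 * descriptor that has already been closed fails with errno 9 (EBADF). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    close(fd);                      /* socket torn down after the failed connect */

    char byte = 0;
    if (write(fd, &byte, 1) < 0) {  /* "flushing" a closed descriptor */
        fprintf(stderr, "flush failed (%d): %s\n", errno, strerror(errno));
    }
    return 0;
}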
00:28:32.295 [2024-05-15 12:30:00.655270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.295 [2024-05-15 12:30:00.655927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.656469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.656520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.295 [2024-05-15 12:30:00.656569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.295 [2024-05-15 12:30:00.657078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.295 [2024-05-15 12:30:00.657257] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.295 [2024-05-15 12:30:00.657270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.295 [2024-05-15 12:30:00.657283] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.295 [2024-05-15 12:30:00.659791] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.295 [2024-05-15 12:30:00.668033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.295 [2024-05-15 12:30:00.668577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.668904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.668952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.295 [2024-05-15 12:30:00.669001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.295 [2024-05-15 12:30:00.669264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.295 [2024-05-15 12:30:00.669444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.295 [2024-05-15 12:30:00.669455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.295 [2024-05-15 12:30:00.669467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.295 [2024-05-15 12:30:00.671963] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.295 [2024-05-15 12:30:00.680859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.295 [2024-05-15 12:30:00.681501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.681960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.681975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.295 [2024-05-15 12:30:00.681989] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.295 [2024-05-15 12:30:00.682169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.295 [2024-05-15 12:30:00.682359] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.295 [2024-05-15 12:30:00.682371] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.295 [2024-05-15 12:30:00.682382] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.295 [2024-05-15 12:30:00.684882] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.295 [2024-05-15 12:30:00.693641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.295 [2024-05-15 12:30:00.694208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.694620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.694670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.295 [2024-05-15 12:30:00.694727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.295 [2024-05-15 12:30:00.695082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.295 [2024-05-15 12:30:00.695271] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.295 [2024-05-15 12:30:00.695283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.295 [2024-05-15 12:30:00.695295] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.295 [2024-05-15 12:30:00.697856] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.295 [2024-05-15 12:30:00.706385] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.295 [2024-05-15 12:30:00.706965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.707410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.707425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.295 [2024-05-15 12:30:00.707439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.295 [2024-05-15 12:30:00.707618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.295 [2024-05-15 12:30:00.707792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.295 [2024-05-15 12:30:00.707804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.295 [2024-05-15 12:30:00.707817] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.295 [2024-05-15 12:30:00.710619] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.295 [2024-05-15 12:30:00.719311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.295 [2024-05-15 12:30:00.719847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.720388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.720440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.295 [2024-05-15 12:30:00.720489] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.295 [2024-05-15 12:30:00.721034] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.295 [2024-05-15 12:30:00.721226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.295 [2024-05-15 12:30:00.721239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.295 [2024-05-15 12:30:00.721252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.295 [2024-05-15 12:30:00.723856] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.295 [2024-05-15 12:30:00.732106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.295 [2024-05-15 12:30:00.732693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.733005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.733054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.295 [2024-05-15 12:30:00.733104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.295 [2024-05-15 12:30:00.733771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.295 [2024-05-15 12:30:00.734309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.295 [2024-05-15 12:30:00.734320] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.295 [2024-05-15 12:30:00.734332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.295 [2024-05-15 12:30:00.736936] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.295 [2024-05-15 12:30:00.744879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.295 [2024-05-15 12:30:00.745522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.745749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-05-15 12:30:00.745764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.295 [2024-05-15 12:30:00.745777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.296 [2024-05-15 12:30:00.745948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.296 [2024-05-15 12:30:00.746112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.296 [2024-05-15 12:30:00.746123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.296 [2024-05-15 12:30:00.746135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.296 [2024-05-15 12:30:00.748731] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.296 [2024-05-15 12:30:00.757794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.296 [2024-05-15 12:30:00.758417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.296 [2024-05-15 12:30:00.758920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.296 [2024-05-15 12:30:00.758969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.296 [2024-05-15 12:30:00.759018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.296 [2024-05-15 12:30:00.759413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.296 [2024-05-15 12:30:00.759663] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.296 [2024-05-15 12:30:00.759680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.296 [2024-05-15 12:30:00.759698] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.296 [2024-05-15 12:30:00.763500] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.296 [2024-05-15 12:30:00.771104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.296 [2024-05-15 12:30:00.771738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.296 [2024-05-15 12:30:00.771969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.296 [2024-05-15 12:30:00.772018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.296 [2024-05-15 12:30:00.772066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.296 [2024-05-15 12:30:00.772298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.296 [2024-05-15 12:30:00.772477] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.296 [2024-05-15 12:30:00.772488] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.296 [2024-05-15 12:30:00.772500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.296 [2024-05-15 12:30:00.775082] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.296 [2024-05-15 12:30:00.783964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.296 [2024-05-15 12:30:00.784530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.296 [2024-05-15 12:30:00.784923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.296 [2024-05-15 12:30:00.784937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.296 [2024-05-15 12:30:00.784951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.296 [2024-05-15 12:30:00.785131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.296 [2024-05-15 12:30:00.785309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.296 [2024-05-15 12:30:00.785321] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.296 [2024-05-15 12:30:00.785334] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.296 [2024-05-15 12:30:00.788006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.296 [2024-05-15 12:30:00.796891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.296 [2024-05-15 12:30:00.797549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.296 [2024-05-15 12:30:00.797976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.296 [2024-05-15 12:30:00.797991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.296 [2024-05-15 12:30:00.798005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.296 [2024-05-15 12:30:00.798186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.296 [2024-05-15 12:30:00.798367] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.296 [2024-05-15 12:30:00.798379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.296 [2024-05-15 12:30:00.798392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.296 [2024-05-15 12:30:00.801088] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.296 [2024-05-15 12:30:00.809745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.296 [2024-05-15 12:30:00.810394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.296 [2024-05-15 12:30:00.810768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.296 [2024-05-15 12:30:00.810782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.296 [2024-05-15 12:30:00.810794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.296 [2024-05-15 12:30:00.810967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.296 [2024-05-15 12:30:00.811137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.296 [2024-05-15 12:30:00.811148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.296 [2024-05-15 12:30:00.811160] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.296 [2024-05-15 12:30:00.813830] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.557 [2024-05-15 12:30:00.822797] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.557 [2024-05-15 12:30:00.823348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.557 [2024-05-15 12:30:00.823814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.557 [2024-05-15 12:30:00.823864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.557 [2024-05-15 12:30:00.823912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.557 [2024-05-15 12:30:00.824358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.557 [2024-05-15 12:30:00.824538] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.557 [2024-05-15 12:30:00.824551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.557 [2024-05-15 12:30:00.824564] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.557 [2024-05-15 12:30:00.827306] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.557 [2024-05-15 12:30:00.835751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.557 [2024-05-15 12:30:00.836384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.557 [2024-05-15 12:30:00.836814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.557 [2024-05-15 12:30:00.836830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.557 [2024-05-15 12:30:00.836844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.557 [2024-05-15 12:30:00.837030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.557 [2024-05-15 12:30:00.837214] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.557 [2024-05-15 12:30:00.837227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.557 [2024-05-15 12:30:00.837241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.557 [2024-05-15 12:30:00.839954] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.557 [2024-05-15 12:30:00.848729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.557 [2024-05-15 12:30:00.849387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.557 [2024-05-15 12:30:00.849764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.557 [2024-05-15 12:30:00.849779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.557 [2024-05-15 12:30:00.849793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.557 [2024-05-15 12:30:00.849973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.557 [2024-05-15 12:30:00.850146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.557 [2024-05-15 12:30:00.850162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.557 [2024-05-15 12:30:00.850175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.557 [2024-05-15 12:30:00.852907] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.557 [2024-05-15 12:30:00.861736] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.557 [2024-05-15 12:30:00.862392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.557 [2024-05-15 12:30:00.862840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.557 [2024-05-15 12:30:00.862855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.557 [2024-05-15 12:30:00.862869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.557 [2024-05-15 12:30:00.863050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.557 [2024-05-15 12:30:00.863247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.557 [2024-05-15 12:30:00.863260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.557 [2024-05-15 12:30:00.863274] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.557 [2024-05-15 12:30:00.865922] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.557 [2024-05-15 12:30:00.874599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.557 [2024-05-15 12:30:00.875236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.557 [2024-05-15 12:30:00.875568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.557 [2024-05-15 12:30:00.875605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.557 [2024-05-15 12:30:00.875619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.557 [2024-05-15 12:30:00.875799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.557 [2024-05-15 12:30:00.875972] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.557 [2024-05-15 12:30:00.875984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.557 [2024-05-15 12:30:00.875997] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.557 [2024-05-15 12:30:00.878702] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.557 [2024-05-15 12:30:00.887573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.557 [2024-05-15 12:30:00.888216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.557 [2024-05-15 12:30:00.888740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.557 [2024-05-15 12:30:00.888789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.557 [2024-05-15 12:30:00.888835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.557 [2024-05-15 12:30:00.889409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.557 [2024-05-15 12:30:00.889589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.557 [2024-05-15 12:30:00.889601] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.557 [2024-05-15 12:30:00.889618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.557 [2024-05-15 12:30:00.892284] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.558 [2024-05-15 12:30:00.900628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.558 [2024-05-15 12:30:00.901144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.901618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.901633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.558 [2024-05-15 12:30:00.901647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.558 [2024-05-15 12:30:00.901832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.558 [2024-05-15 12:30:00.902011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.558 [2024-05-15 12:30:00.902023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.558 [2024-05-15 12:30:00.902036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.558 [2024-05-15 12:30:00.904746] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.558 [2024-05-15 12:30:00.913628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.558 [2024-05-15 12:30:00.914209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.914613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.914628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.558 [2024-05-15 12:30:00.914643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.558 [2024-05-15 12:30:00.914829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.558 [2024-05-15 12:30:00.915008] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.558 [2024-05-15 12:30:00.915020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.558 [2024-05-15 12:30:00.915033] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.558 [2024-05-15 12:30:00.917748] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.558 [2024-05-15 12:30:00.926671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.558 [2024-05-15 12:30:00.927113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.927459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.927510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.558 [2024-05-15 12:30:00.927555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.558 [2024-05-15 12:30:00.928031] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.558 [2024-05-15 12:30:00.928213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.558 [2024-05-15 12:30:00.928226] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.558 [2024-05-15 12:30:00.928239] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.558 [2024-05-15 12:30:00.930959] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.558 [2024-05-15 12:30:00.939756] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.558 [2024-05-15 12:30:00.940337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.940565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.940614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.558 [2024-05-15 12:30:00.940662] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.558 [2024-05-15 12:30:00.941314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.558 [2024-05-15 12:30:00.941900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.558 [2024-05-15 12:30:00.941912] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.558 [2024-05-15 12:30:00.941926] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.558 [2024-05-15 12:30:00.945591] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.558 [2024-05-15 12:30:00.953515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.558 [2024-05-15 12:30:00.954165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.954673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.954723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.558 [2024-05-15 12:30:00.954769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.558 [2024-05-15 12:30:00.955150] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.558 [2024-05-15 12:30:00.955336] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.558 [2024-05-15 12:30:00.955349] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.558 [2024-05-15 12:30:00.955362] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.558 [2024-05-15 12:30:00.958064] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.558 [2024-05-15 12:30:00.966564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.558 [2024-05-15 12:30:00.967200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.967660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.967708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.558 [2024-05-15 12:30:00.967756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.558 [2024-05-15 12:30:00.968408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.558 [2024-05-15 12:30:00.968974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.558 [2024-05-15 12:30:00.968986] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.558 [2024-05-15 12:30:00.968999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.558 [2024-05-15 12:30:00.971739] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.558 [2024-05-15 12:30:00.979563] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.558 [2024-05-15 12:30:00.980230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.980684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.980732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.558 [2024-05-15 12:30:00.980781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.558 [2024-05-15 12:30:00.981433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.558 [2024-05-15 12:30:00.981944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.558 [2024-05-15 12:30:00.981957] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.558 [2024-05-15 12:30:00.981969] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.558 [2024-05-15 12:30:00.985602] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.558 [2024-05-15 12:30:00.993367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.558 [2024-05-15 12:30:00.994012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.994538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:00.994588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.558 [2024-05-15 12:30:00.994637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.558 [2024-05-15 12:30:00.995155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.558 [2024-05-15 12:30:00.995363] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.558 [2024-05-15 12:30:00.995376] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.558 [2024-05-15 12:30:00.995389] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.558 [2024-05-15 12:30:00.998097] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.558 [2024-05-15 12:30:01.006282] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.558 [2024-05-15 12:30:01.006870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:01.007375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:01.007425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.558 [2024-05-15 12:30:01.007472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.558 [2024-05-15 12:30:01.008057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.558 [2024-05-15 12:30:01.008239] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.558 [2024-05-15 12:30:01.008252] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.558 [2024-05-15 12:30:01.008265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.558 [2024-05-15 12:30:01.011006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.558 [2024-05-15 12:30:01.019293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.558 [2024-05-15 12:30:01.019861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:01.020168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.558 [2024-05-15 12:30:01.020230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.558 [2024-05-15 12:30:01.020279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.558 [2024-05-15 12:30:01.020918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.559 [2024-05-15 12:30:01.021208] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.559 [2024-05-15 12:30:01.021221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.559 [2024-05-15 12:30:01.021234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.559 [2024-05-15 12:30:01.023812] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.559 [2024-05-15 12:30:01.032107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.559 [2024-05-15 12:30:01.032533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.559 [2024-05-15 12:30:01.033025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.559 [2024-05-15 12:30:01.033045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.559 [2024-05-15 12:30:01.033065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.559 [2024-05-15 12:30:01.033328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.559 [2024-05-15 12:30:01.033578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.559 [2024-05-15 12:30:01.033595] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.559 [2024-05-15 12:30:01.033613] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.559 [2024-05-15 12:30:01.037405] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.559 [2024-05-15 12:30:01.045249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.559 [2024-05-15 12:30:01.045880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.559 [2024-05-15 12:30:01.046381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.559 [2024-05-15 12:30:01.046432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.559 [2024-05-15 12:30:01.046478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.559 [2024-05-15 12:30:01.047063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.559 [2024-05-15 12:30:01.047247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.559 [2024-05-15 12:30:01.047260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.559 [2024-05-15 12:30:01.047272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.559 [2024-05-15 12:30:01.049848] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.559 [2024-05-15 12:30:01.058017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.559 [2024-05-15 12:30:01.058617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.559 [2024-05-15 12:30:01.059071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.559 [2024-05-15 12:30:01.059114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.559 [2024-05-15 12:30:01.059128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.559 [2024-05-15 12:30:01.059314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.559 [2024-05-15 12:30:01.059492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.559 [2024-05-15 12:30:01.059504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.559 [2024-05-15 12:30:01.059515] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.559 [2024-05-15 12:30:01.062011] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.559 [2024-05-15 12:30:01.070880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.559 [2024-05-15 12:30:01.071490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.559 [2024-05-15 12:30:01.071981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.559 [2024-05-15 12:30:01.072029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.559 [2024-05-15 12:30:01.072058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.559 [2024-05-15 12:30:01.072235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.559 [2024-05-15 12:30:01.072400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.559 [2024-05-15 12:30:01.072412] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.559 [2024-05-15 12:30:01.072424] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.559 [2024-05-15 12:30:01.074975] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.559 [2024-05-15 12:30:01.083814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.559 [2024-05-15 12:30:01.084397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.559 [2024-05-15 12:30:01.084853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.559 [2024-05-15 12:30:01.084868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.559 [2024-05-15 12:30:01.084882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.559 [2024-05-15 12:30:01.085068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.819 [2024-05-15 12:30:01.085253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.819 [2024-05-15 12:30:01.085266] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.819 [2024-05-15 12:30:01.085280] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.820 [2024-05-15 12:30:01.087993] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.820 [2024-05-15 12:30:01.096720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.820 [2024-05-15 12:30:01.097269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.097744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.097793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.820 [2024-05-15 12:30:01.097850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.820 [2024-05-15 12:30:01.098102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.820 [2024-05-15 12:30:01.098278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.820 [2024-05-15 12:30:01.098290] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.820 [2024-05-15 12:30:01.098302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.820 [2024-05-15 12:30:01.100939] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.820 [2024-05-15 12:30:01.109687] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.820 [2024-05-15 12:30:01.110283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.110792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.110842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.820 [2024-05-15 12:30:01.110890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.820 [2024-05-15 12:30:01.111156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.820 [2024-05-15 12:30:01.111351] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.820 [2024-05-15 12:30:01.111364] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.820 [2024-05-15 12:30:01.111376] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.820 [2024-05-15 12:30:01.113936] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.820 [2024-05-15 12:30:01.122583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.820 [2024-05-15 12:30:01.123020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.123249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.123265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.820 [2024-05-15 12:30:01.123278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.820 [2024-05-15 12:30:01.123452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.820 [2024-05-15 12:30:01.123617] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.820 [2024-05-15 12:30:01.123628] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.820 [2024-05-15 12:30:01.123641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.820 [2024-05-15 12:30:01.126227] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.820 [2024-05-15 12:30:01.135394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.820 [2024-05-15 12:30:01.136011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.137116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.137143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.820 [2024-05-15 12:30:01.137157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.820 [2024-05-15 12:30:01.137384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.820 [2024-05-15 12:30:01.137559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.820 [2024-05-15 12:30:01.137572] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.820 [2024-05-15 12:30:01.137585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.820 [2024-05-15 12:30:01.140265] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.820 [2024-05-15 12:30:01.148398] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.820 [2024-05-15 12:30:01.149040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.149368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.149384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.820 [2024-05-15 12:30:01.149399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.820 [2024-05-15 12:30:01.149586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.820 [2024-05-15 12:30:01.149764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.820 [2024-05-15 12:30:01.149776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.820 [2024-05-15 12:30:01.149790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.820 [2024-05-15 12:30:01.152508] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.820 [2024-05-15 12:30:01.161426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.820 [2024-05-15 12:30:01.162025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.162503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.162557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.820 [2024-05-15 12:30:01.162606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.820 [2024-05-15 12:30:01.163142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.820 [2024-05-15 12:30:01.163328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.820 [2024-05-15 12:30:01.163341] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.820 [2024-05-15 12:30:01.163354] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.820 [2024-05-15 12:30:01.166078] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.820 [2024-05-15 12:30:01.174377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.820 [2024-05-15 12:30:01.174906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.175755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.175782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.820 [2024-05-15 12:30:01.175797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.820 [2024-05-15 12:30:01.175992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.820 [2024-05-15 12:30:01.176175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.820 [2024-05-15 12:30:01.176187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.820 [2024-05-15 12:30:01.176205] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.820 [2024-05-15 12:30:01.178917] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.820 [2024-05-15 12:30:01.187375] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.820 [2024-05-15 12:30:01.187902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.188353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.188395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.820 [2024-05-15 12:30:01.188410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.820 [2024-05-15 12:30:01.188597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.820 [2024-05-15 12:30:01.188776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.820 [2024-05-15 12:30:01.188789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.820 [2024-05-15 12:30:01.188802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.820 [2024-05-15 12:30:01.191518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.820 [2024-05-15 12:30:01.200300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.820 [2024-05-15 12:30:01.200853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.201234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.820 [2024-05-15 12:30:01.201250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.820 [2024-05-15 12:30:01.201265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.820 [2024-05-15 12:30:01.201450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.820 [2024-05-15 12:30:01.201628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.821 [2024-05-15 12:30:01.201641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.821 [2024-05-15 12:30:01.201654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.821 [2024-05-15 12:30:01.204389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.821 [2024-05-15 12:30:01.213323] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.821 [2024-05-15 12:30:01.213869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.214234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.214250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.821 [2024-05-15 12:30:01.214264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.821 [2024-05-15 12:30:01.214450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.821 [2024-05-15 12:30:01.214630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.821 [2024-05-15 12:30:01.214647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.821 [2024-05-15 12:30:01.214660] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.821 [2024-05-15 12:30:01.217381] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.821 [2024-05-15 12:30:01.226308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.821 [2024-05-15 12:30:01.226944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.227394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.227410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.821 [2024-05-15 12:30:01.227423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.821 [2024-05-15 12:30:01.227608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.821 [2024-05-15 12:30:01.227787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.821 [2024-05-15 12:30:01.227799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.821 [2024-05-15 12:30:01.227812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.821 [2024-05-15 12:30:01.230524] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.821 [2024-05-15 12:30:01.239297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.821 [2024-05-15 12:30:01.239935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.240350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.240368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.821 [2024-05-15 12:30:01.240382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.821 [2024-05-15 12:30:01.240580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.821 [2024-05-15 12:30:01.240759] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.821 [2024-05-15 12:30:01.240772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.821 [2024-05-15 12:30:01.240785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.821 [2024-05-15 12:30:01.243503] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.821 [2024-05-15 12:30:01.252276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.821 [2024-05-15 12:30:01.252927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.253305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.253323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.821 [2024-05-15 12:30:01.253337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.821 [2024-05-15 12:30:01.253525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.821 [2024-05-15 12:30:01.253703] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.821 [2024-05-15 12:30:01.253715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.821 [2024-05-15 12:30:01.253732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.821 [2024-05-15 12:30:01.256450] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.821 [2024-05-15 12:30:01.265221] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.821 [2024-05-15 12:30:01.265785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.266118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.266133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.821 [2024-05-15 12:30:01.266148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.821 [2024-05-15 12:30:01.266340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.821 [2024-05-15 12:30:01.266521] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.821 [2024-05-15 12:30:01.266534] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.821 [2024-05-15 12:30:01.266547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.821 [2024-05-15 12:30:01.269259] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.821 [2024-05-15 12:30:01.278187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.821 [2024-05-15 12:30:01.278773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.279196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.279212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.821 [2024-05-15 12:30:01.279227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.821 [2024-05-15 12:30:01.279413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.821 [2024-05-15 12:30:01.279591] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.821 [2024-05-15 12:30:01.279603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.821 [2024-05-15 12:30:01.279616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.821 [2024-05-15 12:30:01.282330] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.821 [2024-05-15 12:30:01.291093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.821 [2024-05-15 12:30:01.291755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.292207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.292223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.821 [2024-05-15 12:30:01.292237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.821 [2024-05-15 12:30:01.292424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.821 [2024-05-15 12:30:01.292603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.821 [2024-05-15 12:30:01.292615] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.821 [2024-05-15 12:30:01.292628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.821 [2024-05-15 12:30:01.295348] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.821 [2024-05-15 12:30:01.304108] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.821 [2024-05-15 12:30:01.304774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.305154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.305169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.821 [2024-05-15 12:30:01.305183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.821 [2024-05-15 12:30:01.305375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.821 [2024-05-15 12:30:01.305554] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.821 [2024-05-15 12:30:01.305566] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.821 [2024-05-15 12:30:01.305579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.821 [2024-05-15 12:30:01.308294] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.821 [2024-05-15 12:30:01.317046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.821 [2024-05-15 12:30:01.317714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.821 [2024-05-15 12:30:01.318117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.822 [2024-05-15 12:30:01.318133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.822 [2024-05-15 12:30:01.318147] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.822 [2024-05-15 12:30:01.318339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.822 [2024-05-15 12:30:01.318519] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.822 [2024-05-15 12:30:01.318533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.822 [2024-05-15 12:30:01.318546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.822 [2024-05-15 12:30:01.321266] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:32.822 [2024-05-15 12:30:01.330034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.822 [2024-05-15 12:30:01.330392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.822 [2024-05-15 12:30:01.330826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.822 [2024-05-15 12:30:01.330843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.822 [2024-05-15 12:30:01.330857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.822 [2024-05-15 12:30:01.331042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.822 [2024-05-15 12:30:01.331229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.822 [2024-05-15 12:30:01.331241] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.822 [2024-05-15 12:30:01.331255] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.822 [2024-05-15 12:30:01.333968] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:32.822 [2024-05-15 12:30:01.343060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:32.822 [2024-05-15 12:30:01.343725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.822 [2024-05-15 12:30:01.344154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.822 [2024-05-15 12:30:01.344169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:32.822 [2024-05-15 12:30:01.344184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:32.822 [2024-05-15 12:30:01.344373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:32.822 [2024-05-15 12:30:01.344552] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:32.822 [2024-05-15 12:30:01.344564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:32.822 [2024-05-15 12:30:01.344577] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:32.822 [2024-05-15 12:30:01.347300] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.082 [2024-05-15 12:30:01.356069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.082 [2024-05-15 12:30:01.356654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.082 [2024-05-15 12:30:01.356904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.082 [2024-05-15 12:30:01.356919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.082 [2024-05-15 12:30:01.356933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.082 [2024-05-15 12:30:01.357118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.082 [2024-05-15 12:30:01.357303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.082 [2024-05-15 12:30:01.357315] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.082 [2024-05-15 12:30:01.357328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.082 [2024-05-15 12:30:01.360041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.082 [2024-05-15 12:30:01.368975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.082 [2024-05-15 12:30:01.369556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.082 [2024-05-15 12:30:01.370009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.082 [2024-05-15 12:30:01.370024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.082 [2024-05-15 12:30:01.370038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.082 [2024-05-15 12:30:01.370232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.082 [2024-05-15 12:30:01.370411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.082 [2024-05-15 12:30:01.370423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.083 [2024-05-15 12:30:01.370437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.083 [2024-05-15 12:30:01.373153] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.083 [2024-05-15 12:30:01.381925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.083 [2024-05-15 12:30:01.382579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.382794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.382809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.083 [2024-05-15 12:30:01.382824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.083 [2024-05-15 12:30:01.383009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.083 [2024-05-15 12:30:01.383189] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.083 [2024-05-15 12:30:01.383206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.083 [2024-05-15 12:30:01.383219] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.083 [2024-05-15 12:30:01.385931] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.083 [2024-05-15 12:30:01.394868] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.083 [2024-05-15 12:30:01.395522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.395974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.395989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.083 [2024-05-15 12:30:01.396003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.083 [2024-05-15 12:30:01.396188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.083 [2024-05-15 12:30:01.396373] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.083 [2024-05-15 12:30:01.396385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.083 [2024-05-15 12:30:01.396398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.083 [2024-05-15 12:30:01.399110] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.083 [2024-05-15 12:30:01.407877] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.083 [2024-05-15 12:30:01.408513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.408964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.408979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.083 [2024-05-15 12:30:01.408995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.083 [2024-05-15 12:30:01.409179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.083 [2024-05-15 12:30:01.409361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.083 [2024-05-15 12:30:01.409374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.083 [2024-05-15 12:30:01.409387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.083 [2024-05-15 12:30:01.412096] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.083 [2024-05-15 12:30:01.420851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.083 [2024-05-15 12:30:01.421409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.421886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.421904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.083 [2024-05-15 12:30:01.421918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.083 [2024-05-15 12:30:01.422104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.083 [2024-05-15 12:30:01.422287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.083 [2024-05-15 12:30:01.422299] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.083 [2024-05-15 12:30:01.422313] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.083 [2024-05-15 12:30:01.425025] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.083 [2024-05-15 12:30:01.433794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.083 [2024-05-15 12:30:01.434446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.434817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.434832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.083 [2024-05-15 12:30:01.434847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.083 [2024-05-15 12:30:01.435031] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.083 [2024-05-15 12:30:01.435214] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.083 [2024-05-15 12:30:01.435227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.083 [2024-05-15 12:30:01.435240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.083 [2024-05-15 12:30:01.437951] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.083 [2024-05-15 12:30:01.446727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.083 [2024-05-15 12:30:01.447351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.447683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.447698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.083 [2024-05-15 12:30:01.447713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.083 [2024-05-15 12:30:01.447898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.083 [2024-05-15 12:30:01.448078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.083 [2024-05-15 12:30:01.448090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.083 [2024-05-15 12:30:01.448104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.083 [2024-05-15 12:30:01.450828] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.083 [2024-05-15 12:30:01.459745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.083 [2024-05-15 12:30:01.460312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.460704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.460719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.083 [2024-05-15 12:30:01.460736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.083 [2024-05-15 12:30:01.460923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.083 [2024-05-15 12:30:01.461102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.083 [2024-05-15 12:30:01.461114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.083 [2024-05-15 12:30:01.461127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.083 [2024-05-15 12:30:01.463841] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.083 [2024-05-15 12:30:01.472774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.083 [2024-05-15 12:30:01.473384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.473838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.473853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.083 [2024-05-15 12:30:01.473867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.083 [2024-05-15 12:30:01.474051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.083 [2024-05-15 12:30:01.474235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.083 [2024-05-15 12:30:01.474248] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.083 [2024-05-15 12:30:01.474260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.083 [2024-05-15 12:30:01.476976] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.083 [2024-05-15 12:30:01.485747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.083 [2024-05-15 12:30:01.486358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.486715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.486730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.083 [2024-05-15 12:30:01.486745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.083 [2024-05-15 12:30:01.486930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.083 [2024-05-15 12:30:01.487109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.083 [2024-05-15 12:30:01.487121] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.083 [2024-05-15 12:30:01.487134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.083 [2024-05-15 12:30:01.489854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.083 [2024-05-15 12:30:01.498783] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.083 [2024-05-15 12:30:01.499432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.499882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.083 [2024-05-15 12:30:01.499898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.084 [2024-05-15 12:30:01.499912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.084 [2024-05-15 12:30:01.500101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.084 [2024-05-15 12:30:01.500284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.084 [2024-05-15 12:30:01.500297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.084 [2024-05-15 12:30:01.500310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.084 [2024-05-15 12:30:01.503013] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.084 [2024-05-15 12:30:01.511781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.084 [2024-05-15 12:30:01.512239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.512550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.512565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.084 [2024-05-15 12:30:01.512579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.084 [2024-05-15 12:30:01.512765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.084 [2024-05-15 12:30:01.512944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.084 [2024-05-15 12:30:01.512956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.084 [2024-05-15 12:30:01.512969] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.084 [2024-05-15 12:30:01.515682] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.084 [2024-05-15 12:30:01.524768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.084 [2024-05-15 12:30:01.525443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.525889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.525904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.084 [2024-05-15 12:30:01.525918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.084 [2024-05-15 12:30:01.526104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.084 [2024-05-15 12:30:01.526287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.084 [2024-05-15 12:30:01.526300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.084 [2024-05-15 12:30:01.526312] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.084 [2024-05-15 12:30:01.529019] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.084 [2024-05-15 12:30:01.537792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.084 [2024-05-15 12:30:01.538440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.538895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.538909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.084 [2024-05-15 12:30:01.538924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.084 [2024-05-15 12:30:01.539111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.084 [2024-05-15 12:30:01.539299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.084 [2024-05-15 12:30:01.539312] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.084 [2024-05-15 12:30:01.539325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.084 [2024-05-15 12:30:01.542033] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.084 [2024-05-15 12:30:01.550803] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.084 [2024-05-15 12:30:01.551460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.551889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.551905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.084 [2024-05-15 12:30:01.551919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.084 [2024-05-15 12:30:01.552104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.084 [2024-05-15 12:30:01.552290] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.084 [2024-05-15 12:30:01.552303] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.084 [2024-05-15 12:30:01.552316] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.084 [2024-05-15 12:30:01.555026] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.084 [2024-05-15 12:30:01.563794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.084 [2024-05-15 12:30:01.564423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.564793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.564808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.084 [2024-05-15 12:30:01.564823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.084 [2024-05-15 12:30:01.565009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.084 [2024-05-15 12:30:01.565188] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.084 [2024-05-15 12:30:01.565206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.084 [2024-05-15 12:30:01.565220] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.084 [2024-05-15 12:30:01.567931] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.084 [2024-05-15 12:30:01.576862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.084 [2024-05-15 12:30:01.577449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.577640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.577655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.084 [2024-05-15 12:30:01.577670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.084 [2024-05-15 12:30:01.577855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.084 [2024-05-15 12:30:01.578033] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.084 [2024-05-15 12:30:01.578049] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.084 [2024-05-15 12:30:01.578062] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.084 [2024-05-15 12:30:01.580782] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.084 [2024-05-15 12:30:01.589872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.084 [2024-05-15 12:30:01.590460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.590913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.590928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.084 [2024-05-15 12:30:01.590942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.084 [2024-05-15 12:30:01.591129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.084 [2024-05-15 12:30:01.591314] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.084 [2024-05-15 12:30:01.591326] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.084 [2024-05-15 12:30:01.591339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.084 [2024-05-15 12:30:01.594049] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.084 [2024-05-15 12:30:01.602826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.084 [2024-05-15 12:30:01.603462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.603835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.084 [2024-05-15 12:30:01.603850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.084 [2024-05-15 12:30:01.603864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.084 [2024-05-15 12:30:01.604049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.084 [2024-05-15 12:30:01.604232] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.084 [2024-05-15 12:30:01.604244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.084 [2024-05-15 12:30:01.604258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.084 [2024-05-15 12:30:01.606991] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.346 [2024-05-15 12:30:01.615768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.346 [2024-05-15 12:30:01.616403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.616610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.616625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.346 [2024-05-15 12:30:01.616639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.346 [2024-05-15 12:30:01.616825] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.346 [2024-05-15 12:30:01.617003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.346 [2024-05-15 12:30:01.617016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.346 [2024-05-15 12:30:01.617034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.346 [2024-05-15 12:30:01.619750] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.346 [2024-05-15 12:30:01.628679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.346 [2024-05-15 12:30:01.629245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.629629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.629644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.346 [2024-05-15 12:30:01.629659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.346 [2024-05-15 12:30:01.629843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.346 [2024-05-15 12:30:01.630022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.346 [2024-05-15 12:30:01.630034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.346 [2024-05-15 12:30:01.630048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.346 [2024-05-15 12:30:01.632768] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.346 [2024-05-15 12:30:01.641691] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.346 [2024-05-15 12:30:01.642266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.642486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.642501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.346 [2024-05-15 12:30:01.642516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.346 [2024-05-15 12:30:01.642702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.346 [2024-05-15 12:30:01.642880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.346 [2024-05-15 12:30:01.642893] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.346 [2024-05-15 12:30:01.642906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.346 [2024-05-15 12:30:01.645620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.346 [2024-05-15 12:30:01.654713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.346 [2024-05-15 12:30:01.655283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.655735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.655751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.346 [2024-05-15 12:30:01.655765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.346 [2024-05-15 12:30:01.655950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.346 [2024-05-15 12:30:01.656128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.346 [2024-05-15 12:30:01.656140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.346 [2024-05-15 12:30:01.656154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.346 [2024-05-15 12:30:01.658880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.346 [2024-05-15 12:30:01.667647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.346 [2024-05-15 12:30:01.668075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.668529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.668545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.346 [2024-05-15 12:30:01.668559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.346 [2024-05-15 12:30:01.668743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.346 [2024-05-15 12:30:01.668922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.346 [2024-05-15 12:30:01.668934] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.346 [2024-05-15 12:30:01.668947] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.346 [2024-05-15 12:30:01.671705] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.346 [2024-05-15 12:30:01.680648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.346 [2024-05-15 12:30:01.681228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.681627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.681643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.346 [2024-05-15 12:30:01.681657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.346 [2024-05-15 12:30:01.681841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.346 [2024-05-15 12:30:01.682020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.346 [2024-05-15 12:30:01.682032] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.346 [2024-05-15 12:30:01.682045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.346 [2024-05-15 12:30:01.684759] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.346 [2024-05-15 12:30:01.693680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.346 [2024-05-15 12:30:01.694325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.694779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.694794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.346 [2024-05-15 12:30:01.694809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.346 [2024-05-15 12:30:01.694995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.346 [2024-05-15 12:30:01.695173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.346 [2024-05-15 12:30:01.695185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.346 [2024-05-15 12:30:01.695203] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.346 [2024-05-15 12:30:01.697912] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.346 [2024-05-15 12:30:01.706676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.346 [2024-05-15 12:30:01.707341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.707753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.707802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.346 [2024-05-15 12:30:01.707851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.346 [2024-05-15 12:30:01.708384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.346 [2024-05-15 12:30:01.708562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.346 [2024-05-15 12:30:01.708575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.346 [2024-05-15 12:30:01.708588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.346 [2024-05-15 12:30:01.711297] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.346 [2024-05-15 12:30:01.719734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.346 [2024-05-15 12:30:01.720303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.720721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.346 [2024-05-15 12:30:01.720770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.346 [2024-05-15 12:30:01.720818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.346 [2024-05-15 12:30:01.721340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.346 [2024-05-15 12:30:01.721590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.347 [2024-05-15 12:30:01.721607] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.347 [2024-05-15 12:30:01.721625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.347 [2024-05-15 12:30:01.725424] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.347 [2024-05-15 12:30:01.733195] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.347 [2024-05-15 12:30:01.733848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.734374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.734424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.347 [2024-05-15 12:30:01.734472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.347 [2024-05-15 12:30:01.734659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.347 [2024-05-15 12:30:01.734825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.347 [2024-05-15 12:30:01.734837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.347 [2024-05-15 12:30:01.734849] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.347 [2024-05-15 12:30:01.737478] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.347 [2024-05-15 12:30:01.746051] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.347 [2024-05-15 12:30:01.746707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.746931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.746980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.347 [2024-05-15 12:30:01.747028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.347 [2024-05-15 12:30:01.747684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.347 [2024-05-15 12:30:01.747886] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.347 [2024-05-15 12:30:01.747898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.347 [2024-05-15 12:30:01.747910] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.347 [2024-05-15 12:30:01.750447] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.347 [2024-05-15 12:30:01.758875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.347 [2024-05-15 12:30:01.759543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.759997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.760046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.347 [2024-05-15 12:30:01.760094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.347 [2024-05-15 12:30:01.760611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.347 [2024-05-15 12:30:01.760777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.347 [2024-05-15 12:30:01.760788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.347 [2024-05-15 12:30:01.760800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.347 [2024-05-15 12:30:01.763334] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.347 [2024-05-15 12:30:01.771682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.347 [2024-05-15 12:30:01.772344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.772886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.772936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.347 [2024-05-15 12:30:01.772984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.347 [2024-05-15 12:30:01.773557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.347 [2024-05-15 12:30:01.773722] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.347 [2024-05-15 12:30:01.773734] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.347 [2024-05-15 12:30:01.773745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.347 [2024-05-15 12:30:01.776268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.347 [2024-05-15 12:30:01.784515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.347 [2024-05-15 12:30:01.785099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.785619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.785677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.347 [2024-05-15 12:30:01.785727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.347 [2024-05-15 12:30:01.786382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.347 [2024-05-15 12:30:01.786894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.347 [2024-05-15 12:30:01.786905] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.347 [2024-05-15 12:30:01.786917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.347 [2024-05-15 12:30:01.789501] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.347 [2024-05-15 12:30:01.797377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.347 [2024-05-15 12:30:01.797964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.798415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.798470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.347 [2024-05-15 12:30:01.798518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.347 [2024-05-15 12:30:01.798753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.347 [2024-05-15 12:30:01.798917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.347 [2024-05-15 12:30:01.798929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.347 [2024-05-15 12:30:01.798941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.347 [2024-05-15 12:30:01.801529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.347 [2024-05-15 12:30:01.810147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.347 [2024-05-15 12:30:01.810739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.811184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.811245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.347 [2024-05-15 12:30:01.811293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.347 [2024-05-15 12:30:01.811735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.347 [2024-05-15 12:30:01.811915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.347 [2024-05-15 12:30:01.811928] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.347 [2024-05-15 12:30:01.811940] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.347 [2024-05-15 12:30:01.814525] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.347 [2024-05-15 12:30:01.823102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.347 [2024-05-15 12:30:01.823509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.823765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.823814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.347 [2024-05-15 12:30:01.823871] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.347 [2024-05-15 12:30:01.824523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.347 [2024-05-15 12:30:01.825003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.347 [2024-05-15 12:30:01.825015] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.347 [2024-05-15 12:30:01.825028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.347 [2024-05-15 12:30:01.827555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.347 [2024-05-15 12:30:01.835817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.347 [2024-05-15 12:30:01.836447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.836952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.347 [2024-05-15 12:30:01.837002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.347 [2024-05-15 12:30:01.837052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.347 [2024-05-15 12:30:01.837668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.347 [2024-05-15 12:30:01.837843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.347 [2024-05-15 12:30:01.837855] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.347 [2024-05-15 12:30:01.837868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.347 [2024-05-15 12:30:01.840423] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.347 [2024-05-15 12:30:01.848740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.347 [2024-05-15 12:30:01.849399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.348 [2024-05-15 12:30:01.849832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.348 [2024-05-15 12:30:01.849847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.348 [2024-05-15 12:30:01.849861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.348 [2024-05-15 12:30:01.850047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.348 [2024-05-15 12:30:01.850230] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.348 [2024-05-15 12:30:01.850243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.348 [2024-05-15 12:30:01.850256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.348 [2024-05-15 12:30:01.852941] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.348 [2024-05-15 12:30:01.861517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.348 [2024-05-15 12:30:01.862185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.348 [2024-05-15 12:30:01.862498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.348 [2024-05-15 12:30:01.862548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.348 [2024-05-15 12:30:01.862597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.348 [2024-05-15 12:30:01.863089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.348 [2024-05-15 12:30:01.863283] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.348 [2024-05-15 12:30:01.863300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.348 [2024-05-15 12:30:01.863318] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.348 [2024-05-15 12:30:01.867118] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.609 [2024-05-15 12:30:01.874929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.609 [2024-05-15 12:30:01.875603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.609 [2024-05-15 12:30:01.875998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.609 [2024-05-15 12:30:01.876047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.609 [2024-05-15 12:30:01.876095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.609 [2024-05-15 12:30:01.876748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.609 [2024-05-15 12:30:01.876982] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.609 [2024-05-15 12:30:01.876994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.609 [2024-05-15 12:30:01.877006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.609 [2024-05-15 12:30:01.879712] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.609 [2024-05-15 12:30:01.887886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.609 [2024-05-15 12:30:01.888546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.609 [2024-05-15 12:30:01.889058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.609 [2024-05-15 12:30:01.889107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.609 [2024-05-15 12:30:01.889156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.609 [2024-05-15 12:30:01.889810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.609 [2024-05-15 12:30:01.890187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.609 [2024-05-15 12:30:01.890204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.609 [2024-05-15 12:30:01.890216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.609 [2024-05-15 12:30:01.892725] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.609 [2024-05-15 12:30:01.900678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.609 [2024-05-15 12:30:01.901259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.609 [2024-05-15 12:30:01.901786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.609 [2024-05-15 12:30:01.901835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.609 [2024-05-15 12:30:01.901883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.609 [2024-05-15 12:30:01.902361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.609 [2024-05-15 12:30:01.902539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.609 [2024-05-15 12:30:01.902551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.609 [2024-05-15 12:30:01.902579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.609 [2024-05-15 12:30:01.905295] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.609 [2024-05-15 12:30:01.913589] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.609 [2024-05-15 12:30:01.914242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.609 [2024-05-15 12:30:01.914693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.609 [2024-05-15 12:30:01.914708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.609 [2024-05-15 12:30:01.914723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.609 [2024-05-15 12:30:01.914908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.609 [2024-05-15 12:30:01.915088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.609 [2024-05-15 12:30:01.915100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.609 [2024-05-15 12:30:01.915113] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.609 [2024-05-15 12:30:01.917828] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.609 [2024-05-15 12:30:01.926597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.609 [2024-05-15 12:30:01.927184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.609 [2024-05-15 12:30:01.927651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.609 [2024-05-15 12:30:01.927667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.609 [2024-05-15 12:30:01.927682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.609 [2024-05-15 12:30:01.927866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.609 [2024-05-15 12:30:01.928045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.609 [2024-05-15 12:30:01.928058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.609 [2024-05-15 12:30:01.928070] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.609 [2024-05-15 12:30:01.930780] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.609 [2024-05-15 12:30:01.939542] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.609 [2024-05-15 12:30:01.940196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.609 [2024-05-15 12:30:01.940625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.609 [2024-05-15 12:30:01.940640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.610 [2024-05-15 12:30:01.940654] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.610 [2024-05-15 12:30:01.940834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.610 [2024-05-15 12:30:01.941007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.610 [2024-05-15 12:30:01.941023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.610 [2024-05-15 12:30:01.941035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.610 [2024-05-15 12:30:01.943678] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.610 [2024-05-15 12:30:01.952515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.610 [2024-05-15 12:30:01.953155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:01.953691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:01.953741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.610 [2024-05-15 12:30:01.953789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.610 [2024-05-15 12:30:01.954140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.610 [2024-05-15 12:30:01.954338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.610 [2024-05-15 12:30:01.954350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.610 [2024-05-15 12:30:01.954363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.610 [2024-05-15 12:30:01.958022] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.610 [2024-05-15 12:30:01.966079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.610 [2024-05-15 12:30:01.966738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:01.967320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:01.967371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.610 [2024-05-15 12:30:01.967418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.610 [2024-05-15 12:30:01.968028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.610 [2024-05-15 12:30:01.968207] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.610 [2024-05-15 12:30:01.968220] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.610 [2024-05-15 12:30:01.968232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.610 [2024-05-15 12:30:01.970741] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.610 [2024-05-15 12:30:01.978850] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.610 [2024-05-15 12:30:01.979499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:01.980017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:01.980066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.610 [2024-05-15 12:30:01.980114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.610 [2024-05-15 12:30:01.980492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.610 [2024-05-15 12:30:01.980681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.610 [2024-05-15 12:30:01.980693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.610 [2024-05-15 12:30:01.980712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.610 [2024-05-15 12:30:01.983354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.610 [2024-05-15 12:30:01.991817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.610 [2024-05-15 12:30:01.992454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:01.992960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:01.993009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.610 [2024-05-15 12:30:01.993057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.610 [2024-05-15 12:30:01.993315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.610 [2024-05-15 12:30:01.993489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.610 [2024-05-15 12:30:01.993501] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.610 [2024-05-15 12:30:01.993514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.610 [2024-05-15 12:30:01.996147] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.610 [2024-05-15 12:30:02.004734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.610 [2024-05-15 12:30:02.005394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:02.005844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:02.005860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.610 [2024-05-15 12:30:02.005875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.610 [2024-05-15 12:30:02.006061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.610 [2024-05-15 12:30:02.006245] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.610 [2024-05-15 12:30:02.006258] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.610 [2024-05-15 12:30:02.006271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.610 [2024-05-15 12:30:02.008979] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.610 [2024-05-15 12:30:02.017740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.610 [2024-05-15 12:30:02.018373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:02.018600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:02.018616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.610 [2024-05-15 12:30:02.018630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.610 [2024-05-15 12:30:02.018813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.610 [2024-05-15 12:30:02.018992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.610 [2024-05-15 12:30:02.019004] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.610 [2024-05-15 12:30:02.019017] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.610 [2024-05-15 12:30:02.021736] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.610 [2024-05-15 12:30:02.030741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.610 [2024-05-15 12:30:02.031313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:02.031768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:02.031782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.610 [2024-05-15 12:30:02.031795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.610 [2024-05-15 12:30:02.031965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.610 [2024-05-15 12:30:02.032130] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.610 [2024-05-15 12:30:02.032141] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.610 [2024-05-15 12:30:02.032153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.610 [2024-05-15 12:30:02.034807] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.610 [2024-05-15 12:30:02.043590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.610 [2024-05-15 12:30:02.044226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:02.044751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:02.044799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.610 [2024-05-15 12:30:02.044848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.610 [2024-05-15 12:30:02.045165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.610 [2024-05-15 12:30:02.045419] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.610 [2024-05-15 12:30:02.045436] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.610 [2024-05-15 12:30:02.045454] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.610 [2024-05-15 12:30:02.049261] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.610 [2024-05-15 12:30:02.057234] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.610 [2024-05-15 12:30:02.057874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:02.058329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.610 [2024-05-15 12:30:02.058381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.610 [2024-05-15 12:30:02.058428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.610 [2024-05-15 12:30:02.058821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.610 [2024-05-15 12:30:02.058995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.610 [2024-05-15 12:30:02.059007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.610 [2024-05-15 12:30:02.059019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.610 [2024-05-15 12:30:02.061697] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.611 [2024-05-15 12:30:02.070174] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.611 [2024-05-15 12:30:02.070837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.611 [2024-05-15 12:30:02.071287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.611 [2024-05-15 12:30:02.071303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.611 [2024-05-15 12:30:02.071316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.611 [2024-05-15 12:30:02.071509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.611 [2024-05-15 12:30:02.071682] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.611 [2024-05-15 12:30:02.071694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.611 [2024-05-15 12:30:02.071707] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.611 [2024-05-15 12:30:02.074426] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.611 [2024-05-15 12:30:02.083138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.611 [2024-05-15 12:30:02.083798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.611 [2024-05-15 12:30:02.084315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.611 [2024-05-15 12:30:02.084366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.611 [2024-05-15 12:30:02.084413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.611 [2024-05-15 12:30:02.084822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.611 [2024-05-15 12:30:02.085000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.611 [2024-05-15 12:30:02.085012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.611 [2024-05-15 12:30:02.085025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.611 [2024-05-15 12:30:02.087711] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.611 [2024-05-15 12:30:02.095972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.611 [2024-05-15 12:30:02.096621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.611 [2024-05-15 12:30:02.097001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.611 [2024-05-15 12:30:02.097016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.611 [2024-05-15 12:30:02.097030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.611 [2024-05-15 12:30:02.097216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.611 [2024-05-15 12:30:02.097411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.611 [2024-05-15 12:30:02.097423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.611 [2024-05-15 12:30:02.097436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.611 [2024-05-15 12:30:02.100145] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.611 [2024-05-15 12:30:02.108880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.611 [2024-05-15 12:30:02.109480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.611 [2024-05-15 12:30:02.109968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.611 [2024-05-15 12:30:02.110018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.611 [2024-05-15 12:30:02.110066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.611 [2024-05-15 12:30:02.110655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.611 [2024-05-15 12:30:02.110830] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.611 [2024-05-15 12:30:02.110842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.611 [2024-05-15 12:30:02.110855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.611 [2024-05-15 12:30:02.113501] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.611 [2024-05-15 12:30:02.121719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.611 [2024-05-15 12:30:02.122375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.611 [2024-05-15 12:30:02.122883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.611 [2024-05-15 12:30:02.122932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.611 [2024-05-15 12:30:02.122980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.611 [2024-05-15 12:30:02.123587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.611 [2024-05-15 12:30:02.123762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.611 [2024-05-15 12:30:02.123774] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.611 [2024-05-15 12:30:02.123786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.611 [2024-05-15 12:30:02.126331] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.611 [2024-05-15 12:30:02.134641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.611 [2024-05-15 12:30:02.135307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.611 [2024-05-15 12:30:02.135815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.611 [2024-05-15 12:30:02.135866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.611 [2024-05-15 12:30:02.135914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.611 [2024-05-15 12:30:02.136570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.871 [2024-05-15 12:30:02.136880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.871 [2024-05-15 12:30:02.136896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.871 [2024-05-15 12:30:02.136915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.871 [2024-05-15 12:30:02.140712] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.871 [2024-05-15 12:30:02.147989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.871 [2024-05-15 12:30:02.148664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.871 [2024-05-15 12:30:02.149111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.149169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.872 [2024-05-15 12:30:02.149234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.872 [2024-05-15 12:30:02.149754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.872 [2024-05-15 12:30:02.149929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.872 [2024-05-15 12:30:02.149940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.872 [2024-05-15 12:30:02.149953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.872 [2024-05-15 12:30:02.152481] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.872 [2024-05-15 12:30:02.160725] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.872 [2024-05-15 12:30:02.161375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.161884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.161933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.872 [2024-05-15 12:30:02.161982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.872 [2024-05-15 12:30:02.162643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.872 [2024-05-15 12:30:02.163055] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.872 [2024-05-15 12:30:02.163067] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.872 [2024-05-15 12:30:02.163080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.872 [2024-05-15 12:30:02.165603] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.872 [2024-05-15 12:30:02.173500] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.872 [2024-05-15 12:30:02.174108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.174681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.174732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.872 [2024-05-15 12:30:02.174779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.872 [2024-05-15 12:30:02.175113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.872 [2024-05-15 12:30:02.175292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.872 [2024-05-15 12:30:02.175304] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.872 [2024-05-15 12:30:02.175316] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.872 [2024-05-15 12:30:02.177829] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.872 [2024-05-15 12:30:02.186214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.872 [2024-05-15 12:30:02.186857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.187335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.187386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.872 [2024-05-15 12:30:02.187442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.872 [2024-05-15 12:30:02.187750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.872 [2024-05-15 12:30:02.188001] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.872 [2024-05-15 12:30:02.188017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.872 [2024-05-15 12:30:02.188035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.872 [2024-05-15 12:30:02.191842] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.872 [2024-05-15 12:30:02.199454] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.872 [2024-05-15 12:30:02.200078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.200545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.200584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.872 [2024-05-15 12:30:02.200598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.872 [2024-05-15 12:30:02.200775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.872 [2024-05-15 12:30:02.200944] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.872 [2024-05-15 12:30:02.200956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.872 [2024-05-15 12:30:02.200970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.872 [2024-05-15 12:30:02.203572] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.872 [2024-05-15 12:30:02.212227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.872 [2024-05-15 12:30:02.212811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.213338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.213388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.872 [2024-05-15 12:30:02.213435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.872 [2024-05-15 12:30:02.213902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.872 [2024-05-15 12:30:02.214066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.872 [2024-05-15 12:30:02.214077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.872 [2024-05-15 12:30:02.214089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.872 [2024-05-15 12:30:02.216626] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.872 [2024-05-15 12:30:02.225130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.872 [2024-05-15 12:30:02.225762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.226303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.226354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.872 [2024-05-15 12:30:02.226401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.872 [2024-05-15 12:30:02.226796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.872 [2024-05-15 12:30:02.226960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.872 [2024-05-15 12:30:02.226971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.872 [2024-05-15 12:30:02.226983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.872 [2024-05-15 12:30:02.229570] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.872 [2024-05-15 12:30:02.237951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.872 [2024-05-15 12:30:02.238589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.239108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.239157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.872 [2024-05-15 12:30:02.239222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.872 [2024-05-15 12:30:02.239849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.872 [2024-05-15 12:30:02.240022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.872 [2024-05-15 12:30:02.240034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.872 [2024-05-15 12:30:02.240047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.872 [2024-05-15 12:30:02.242650] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.872 [2024-05-15 12:30:02.250665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.872 [2024-05-15 12:30:02.251315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.251822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.251870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.872 [2024-05-15 12:30:02.251919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.872 [2024-05-15 12:30:02.252575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.872 [2024-05-15 12:30:02.253004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.872 [2024-05-15 12:30:02.253016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.872 [2024-05-15 12:30:02.253029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.872 [2024-05-15 12:30:02.255555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.872 [2024-05-15 12:30:02.263415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.872 [2024-05-15 12:30:02.264041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.264516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.872 [2024-05-15 12:30:02.264568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.872 [2024-05-15 12:30:02.264616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.872 [2024-05-15 12:30:02.265089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.872 [2024-05-15 12:30:02.265280] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.872 [2024-05-15 12:30:02.265293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.872 [2024-05-15 12:30:02.265305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.873 [2024-05-15 12:30:02.267814] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.873 [2024-05-15 12:30:02.276212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.873 [2024-05-15 12:30:02.276785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.277300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.277316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.873 [2024-05-15 12:30:02.277330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.873 [2024-05-15 12:30:02.277514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.873 [2024-05-15 12:30:02.277688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.873 [2024-05-15 12:30:02.277699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.873 [2024-05-15 12:30:02.277712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.873 [2024-05-15 12:30:02.280262] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.873 [2024-05-15 12:30:02.288936] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.873 [2024-05-15 12:30:02.289573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.290029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.290085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.873 [2024-05-15 12:30:02.290098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.873 [2024-05-15 12:30:02.290293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.873 [2024-05-15 12:30:02.290470] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.873 [2024-05-15 12:30:02.290482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.873 [2024-05-15 12:30:02.290494] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.873 [2024-05-15 12:30:02.293044] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.873 [2024-05-15 12:30:02.301701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.873 [2024-05-15 12:30:02.302343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.302865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.302913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.873 [2024-05-15 12:30:02.302962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.873 [2024-05-15 12:30:02.303145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.873 [2024-05-15 12:30:02.303337] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.873 [2024-05-15 12:30:02.303353] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.873 [2024-05-15 12:30:02.303366] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.873 [2024-05-15 12:30:02.305951] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
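The repeated "connect() failed, errno = 111" entries above all have the same cause: on Linux errno 111 is ECONNREFUSED, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 while bdevperf keeps retrying the controller reset (the target process has just been killed, as the next log line shows). A minimal Python sketch of what each retry observes; the address and port are taken from the log, everything else is illustrative and not part of the test code:

    import errno
    import socket

    def try_connect(addr="10.0.0.2", port=4420, timeout=1.0):
        # Attempt a plain TCP connect, the same step posix_sock_create reports as failing.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((addr, port))
            return 0                 # a listener is back, connect succeeded
        except OSError as e:
            return e.errno or -1     # 111 (ECONNREFUSED) while the target is down
        finally:
            s.close()

    rc = try_connect()
    print(rc, errno.errorcode.get(rc, "OK"))   # expected while the target is down: 111 ECONNREFUSED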
00:28:33.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2292386 Killed "${NVMF_APP[@]}" "$@" 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.873 [2024-05-15 12:30:02.314749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.873 [2024-05-15 12:30:02.315381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.315834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.315849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.873 [2024-05-15 12:30:02.315864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.873 [2024-05-15 12:30:02.316047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.873 [2024-05-15 12:30:02.316232] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.873 [2024-05-15 12:30:02.316245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.873 [2024-05-15 12:30:02.316257] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2293933 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2293933 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 2293933 ']' 00:28:33.873 [2024-05-15 12:30:02.318968] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:33.873 12:30:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.873 [2024-05-15 12:30:02.327781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.873 [2024-05-15 12:30:02.328440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.328823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.328839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.873 [2024-05-15 12:30:02.328855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.873 [2024-05-15 12:30:02.329042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.873 [2024-05-15 12:30:02.329230] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.873 [2024-05-15 12:30:02.329243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.873 [2024-05-15 12:30:02.329259] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.873 [2024-05-15 12:30:02.331965] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.873 [2024-05-15 12:30:02.340729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.873 [2024-05-15 12:30:02.341385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.341840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.341855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.873 [2024-05-15 12:30:02.341869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.873 [2024-05-15 12:30:02.342049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.873 [2024-05-15 12:30:02.342246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.873 [2024-05-15 12:30:02.342259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.873 [2024-05-15 12:30:02.342272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.873 [2024-05-15 12:30:02.344987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:33.873 [2024-05-15 12:30:02.353750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.873 [2024-05-15 12:30:02.354336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.354713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.354728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.873 [2024-05-15 12:30:02.354742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.873 [2024-05-15 12:30:02.354927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.873 [2024-05-15 12:30:02.355105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.873 [2024-05-15 12:30:02.355117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.873 [2024-05-15 12:30:02.355130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.873 [2024-05-15 12:30:02.357850] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.873 [2024-05-15 12:30:02.366773] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.873 [2024-05-15 12:30:02.367200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.367623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.873 [2024-05-15 12:30:02.367638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.873 [2024-05-15 12:30:02.367653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.873 [2024-05-15 12:30:02.367838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.873 [2024-05-15 12:30:02.368017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.873 [2024-05-15 12:30:02.368033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.873 [2024-05-15 12:30:02.368046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.873 [2024-05-15 12:30:02.370760] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.873 [2024-05-15 12:30:02.371312] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:28:33.874 [2024-05-15 12:30:02.371364] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.874 [2024-05-15 12:30:02.379775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.874 [2024-05-15 12:30:02.380389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 12:30:02.380846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 12:30:02.380863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.874 [2024-05-15 12:30:02.380877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.874 [2024-05-15 12:30:02.381058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.874 [2024-05-15 12:30:02.381255] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.874 [2024-05-15 12:30:02.381268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.874 [2024-05-15 12:30:02.381282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.874 [2024-05-15 12:30:02.383960] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.874 [2024-05-15 12:30:02.392645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.874 [2024-05-15 12:30:02.393293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 12:30:02.393740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.874 [2024-05-15 12:30:02.393756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:33.874 [2024-05-15 12:30:02.393770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:33.874 [2024-05-15 12:30:02.393952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:33.874 [2024-05-15 12:30:02.394126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.874 [2024-05-15 12:30:02.394138] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.874 [2024-05-15 12:30:02.394151] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.874 [2024-05-15 12:30:02.396865] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.133 [2024-05-15 12:30:02.405619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.133 [2024-05-15 12:30:02.406281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.133 [2024-05-15 12:30:02.406730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.133 [2024-05-15 12:30:02.406745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.133 [2024-05-15 12:30:02.406759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.133 [2024-05-15 12:30:02.406944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.133 [2024-05-15 12:30:02.407136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.133 [2024-05-15 12:30:02.407148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.133 [2024-05-15 12:30:02.407161] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.133 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.133 [2024-05-15 12:30:02.409853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.133 [2024-05-15 12:30:02.418498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.133 [2024-05-15 12:30:02.419155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.133 [2024-05-15 12:30:02.419603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.133 [2024-05-15 12:30:02.419618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.133 [2024-05-15 12:30:02.419632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.133 [2024-05-15 12:30:02.419816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.134 [2024-05-15 12:30:02.419994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.134 [2024-05-15 12:30:02.420006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.134 [2024-05-15 12:30:02.420019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.134 [2024-05-15 12:30:02.422678] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.134 [2024-05-15 12:30:02.431490] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.134 [2024-05-15 12:30:02.432150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.432603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.432619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.134 [2024-05-15 12:30:02.432633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.134 [2024-05-15 12:30:02.432821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.134 [2024-05-15 12:30:02.433000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.134 [2024-05-15 12:30:02.433012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.134 [2024-05-15 12:30:02.433025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.134 [2024-05-15 12:30:02.435680] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.134 [2024-05-15 12:30:02.444522] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.134 [2024-05-15 12:30:02.445169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.445611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.445626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.134 [2024-05-15 12:30:02.445640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.134 [2024-05-15 12:30:02.445822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.134 [2024-05-15 12:30:02.445995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.134 [2024-05-15 12:30:02.446010] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.134 [2024-05-15 12:30:02.446023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.134 [2024-05-15 12:30:02.447399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:34.134 [2024-05-15 12:30:02.448752] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.134 [2024-05-15 12:30:02.457600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.134 [2024-05-15 12:30:02.458284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.458678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.458693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.134 [2024-05-15 12:30:02.458708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.134 [2024-05-15 12:30:02.458891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.134 [2024-05-15 12:30:02.459066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.134 [2024-05-15 12:30:02.459079] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.134 [2024-05-15 12:30:02.459092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.134 [2024-05-15 12:30:02.461793] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.134 [2024-05-15 12:30:02.470587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.134 [2024-05-15 12:30:02.471220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.471672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.471687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.134 [2024-05-15 12:30:02.471700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.134 [2024-05-15 12:30:02.471881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.134 [2024-05-15 12:30:02.472055] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.134 [2024-05-15 12:30:02.472067] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.134 [2024-05-15 12:30:02.472080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.134 [2024-05-15 12:30:02.474776] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.134 [2024-05-15 12:30:02.483596] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.134 [2024-05-15 12:30:02.484173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.484617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.484632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.134 [2024-05-15 12:30:02.484646] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.134 [2024-05-15 12:30:02.484826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.134 [2024-05-15 12:30:02.484999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.134 [2024-05-15 12:30:02.485015] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.134 [2024-05-15 12:30:02.485029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.134 [2024-05-15 12:30:02.487786] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.134 [2024-05-15 12:30:02.496549] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.134 [2024-05-15 12:30:02.497229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.497658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.497674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.134 [2024-05-15 12:30:02.497690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.134 [2024-05-15 12:30:02.497877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.134 [2024-05-15 12:30:02.498056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.134 [2024-05-15 12:30:02.498068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.134 [2024-05-15 12:30:02.498082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.134 [2024-05-15 12:30:02.500797] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.134 [2024-05-15 12:30:02.509538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.134 [2024-05-15 12:30:02.510198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.510633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.510649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.134 [2024-05-15 12:30:02.510663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.134 [2024-05-15 12:30:02.510845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.134 [2024-05-15 12:30:02.511019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.134 [2024-05-15 12:30:02.511031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.134 [2024-05-15 12:30:02.511044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.134 [2024-05-15 12:30:02.513747] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.134 [2024-05-15 12:30:02.521675] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.134 [2024-05-15 12:30:02.521703] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.134 [2024-05-15 12:30:02.521712] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.134 [2024-05-15 12:30:02.521721] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.134 [2024-05-15 12:30:02.521728] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:34.134 [2024-05-15 12:30:02.521772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.134 [2024-05-15 12:30:02.521854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.134 [2024-05-15 12:30:02.521856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.134 [2024-05-15 12:30:02.522679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.134 [2024-05-15 12:30:02.523356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.523812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.523827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.134 [2024-05-15 12:30:02.523843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.134 [2024-05-15 12:30:02.524032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.134 [2024-05-15 12:30:02.524217] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.134 [2024-05-15 12:30:02.524229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.134 [2024-05-15 12:30:02.524244] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.134 [2024-05-15 12:30:02.526965] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.134 [2024-05-15 12:30:02.535736] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.134 [2024-05-15 12:30:02.536397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.134 [2024-05-15 12:30:02.536776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.536791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.135 [2024-05-15 12:30:02.536806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.135 [2024-05-15 12:30:02.536993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.135 [2024-05-15 12:30:02.537173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.135 [2024-05-15 12:30:02.537186] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.135 [2024-05-15 12:30:02.537206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.135 [2024-05-15 12:30:02.539927] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
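The three "Reactor started on core 1/2/3" notices above and the earlier "Total cores available: 3" message are consistent with the "-c 0xE" coremask passed in the DPDK EAL parameters at the start of this block: 0xE is binary 1110, selecting cores 1, 2 and 3. A small standalone sketch (editorial illustration, not SPDK or DPDK code) that decodes such a mask:

/* Editorial sketch: decode a DPDK-style hex coremask into core indices.
 * 0xE == 0b1110 selects cores 1, 2 and 3, i.e. three reactors, matching
 * the reactor_run notices above. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long mask = strtoul("0xE", NULL, 16);
    int count = 0;

    printf("coremask 0x%lX selects cores:", mask);
    for (int core = 0; mask != 0; core++, mask >>= 1) {
        if (mask & 1UL) {
            printf(" %d", core);
            count++;
        }
    }
    printf(" (total %d)\n", count);
    return 0;
}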
00:28:34.135 [2024-05-15 12:30:02.548706] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.135 [2024-05-15 12:30:02.549377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.549830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.549846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.135 [2024-05-15 12:30:02.549861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.135 [2024-05-15 12:30:02.550048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.135 [2024-05-15 12:30:02.550247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.135 [2024-05-15 12:30:02.550261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.135 [2024-05-15 12:30:02.550274] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.135 [2024-05-15 12:30:02.552988] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.135 [2024-05-15 12:30:02.561754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.135 [2024-05-15 12:30:02.562414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.562868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.562893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.135 [2024-05-15 12:30:02.562907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.135 [2024-05-15 12:30:02.563095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.135 [2024-05-15 12:30:02.563279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.135 [2024-05-15 12:30:02.563292] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.135 [2024-05-15 12:30:02.563305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.135 [2024-05-15 12:30:02.566019] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.135 [2024-05-15 12:30:02.574792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.135 [2024-05-15 12:30:02.575458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.575764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.575780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.135 [2024-05-15 12:30:02.575794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.135 [2024-05-15 12:30:02.575977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.135 [2024-05-15 12:30:02.576153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.135 [2024-05-15 12:30:02.576165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.135 [2024-05-15 12:30:02.576178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.135 [2024-05-15 12:30:02.578904] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.135 [2024-05-15 12:30:02.587824] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.135 [2024-05-15 12:30:02.588481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.588877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.588892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.135 [2024-05-15 12:30:02.588906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.135 [2024-05-15 12:30:02.589091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.135 [2024-05-15 12:30:02.589275] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.135 [2024-05-15 12:30:02.589288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.135 [2024-05-15 12:30:02.589316] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.135 [2024-05-15 12:30:02.592031] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.135 [2024-05-15 12:30:02.600790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.135 [2024-05-15 12:30:02.601437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.601812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.601827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.135 [2024-05-15 12:30:02.601846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.135 [2024-05-15 12:30:02.602031] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.135 [2024-05-15 12:30:02.602216] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.135 [2024-05-15 12:30:02.602229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.135 [2024-05-15 12:30:02.602242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.135 [2024-05-15 12:30:02.604948] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.135 [2024-05-15 12:30:02.613715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.135 [2024-05-15 12:30:02.614350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.614729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.614745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.135 [2024-05-15 12:30:02.614759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.135 [2024-05-15 12:30:02.614944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.135 [2024-05-15 12:30:02.615124] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.135 [2024-05-15 12:30:02.615136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.135 [2024-05-15 12:30:02.615149] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.135 [2024-05-15 12:30:02.617867] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.135 [2024-05-15 12:30:02.626630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.135 [2024-05-15 12:30:02.627267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.627714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.627729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.135 [2024-05-15 12:30:02.627743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.135 [2024-05-15 12:30:02.627928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.135 [2024-05-15 12:30:02.628108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.135 [2024-05-15 12:30:02.628120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.135 [2024-05-15 12:30:02.628133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.135 [2024-05-15 12:30:02.630853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.135 [2024-05-15 12:30:02.639608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.135 [2024-05-15 12:30:02.640260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.640636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.640651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.135 [2024-05-15 12:30:02.640665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.135 [2024-05-15 12:30:02.640853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.135 [2024-05-15 12:30:02.641032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.135 [2024-05-15 12:30:02.641044] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.135 [2024-05-15 12:30:02.641057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.135 [2024-05-15 12:30:02.643771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.135 [2024-05-15 12:30:02.652533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.135 [2024-05-15 12:30:02.653181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.653560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.135 [2024-05-15 12:30:02.653576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.135 [2024-05-15 12:30:02.653590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.135 [2024-05-15 12:30:02.653775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.135 [2024-05-15 12:30:02.653953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.135 [2024-05-15 12:30:02.653966] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.136 [2024-05-15 12:30:02.653979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.136 [2024-05-15 12:30:02.656691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.395 [2024-05-15 12:30:02.665451] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.395 [2024-05-15 12:30:02.666092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.395 [2024-05-15 12:30:02.666464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.395 [2024-05-15 12:30:02.666481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.395 [2024-05-15 12:30:02.666495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.395 [2024-05-15 12:30:02.666681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.395 [2024-05-15 12:30:02.666859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.395 [2024-05-15 12:30:02.666871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.395 [2024-05-15 12:30:02.666884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.395 [2024-05-15 12:30:02.669601] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.395 [2024-05-15 12:30:02.678356] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.395 [2024-05-15 12:30:02.679011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.395 [2024-05-15 12:30:02.679388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.395 [2024-05-15 12:30:02.679404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.395 [2024-05-15 12:30:02.679418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.395 [2024-05-15 12:30:02.679603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.395 [2024-05-15 12:30:02.679787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.395 [2024-05-15 12:30:02.679799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.395 [2024-05-15 12:30:02.679812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.395 [2024-05-15 12:30:02.682525] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.395 [2024-05-15 12:30:02.691282] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.395 [2024-05-15 12:30:02.691934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.395 [2024-05-15 12:30:02.692362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.395 [2024-05-15 12:30:02.692378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.395 [2024-05-15 12:30:02.692391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.395 [2024-05-15 12:30:02.692575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.395 [2024-05-15 12:30:02.692754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.395 [2024-05-15 12:30:02.692766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.395 [2024-05-15 12:30:02.692779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.396 [2024-05-15 12:30:02.695699] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.396 [2024-05-15 12:30:02.704306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.396 [2024-05-15 12:30:02.704867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.705250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.705269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.396 [2024-05-15 12:30:02.705283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.396 [2024-05-15 12:30:02.705471] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.396 [2024-05-15 12:30:02.705650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.396 [2024-05-15 12:30:02.705663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.396 [2024-05-15 12:30:02.705678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.396 [2024-05-15 12:30:02.708396] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.396 [2024-05-15 12:30:02.717324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.396 [2024-05-15 12:30:02.717975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.718350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.718367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.396 [2024-05-15 12:30:02.718381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.396 [2024-05-15 12:30:02.718568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.396 [2024-05-15 12:30:02.718747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.396 [2024-05-15 12:30:02.718763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.396 [2024-05-15 12:30:02.718776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.396 [2024-05-15 12:30:02.721496] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.396 [2024-05-15 12:30:02.730257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.396 [2024-05-15 12:30:02.730904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.731305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.731321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.396 [2024-05-15 12:30:02.731335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.396 [2024-05-15 12:30:02.731520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.396 [2024-05-15 12:30:02.731700] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.396 [2024-05-15 12:30:02.731713] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.396 [2024-05-15 12:30:02.731726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.396 [2024-05-15 12:30:02.734443] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.396 [2024-05-15 12:30:02.743204] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.396 [2024-05-15 12:30:02.743791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.744238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.744254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.396 [2024-05-15 12:30:02.744269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.396 [2024-05-15 12:30:02.744453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.396 [2024-05-15 12:30:02.744632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.396 [2024-05-15 12:30:02.744644] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.396 [2024-05-15 12:30:02.744657] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.396 [2024-05-15 12:30:02.747371] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.396 [2024-05-15 12:30:02.756142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.396 [2024-05-15 12:30:02.756737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.757163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.757178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.396 [2024-05-15 12:30:02.757256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.396 [2024-05-15 12:30:02.757444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.396 [2024-05-15 12:30:02.757623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.396 [2024-05-15 12:30:02.757635] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.396 [2024-05-15 12:30:02.757653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.396 [2024-05-15 12:30:02.760362] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.396 [2024-05-15 12:30:02.769119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.396 [2024-05-15 12:30:02.769710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.770210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.770227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.396 [2024-05-15 12:30:02.770240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.396 [2024-05-15 12:30:02.770424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.396 [2024-05-15 12:30:02.770603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.396 [2024-05-15 12:30:02.770615] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.396 [2024-05-15 12:30:02.770628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.396 [2024-05-15 12:30:02.773344] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.396 [2024-05-15 12:30:02.782117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.396 [2024-05-15 12:30:02.782773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.783251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.783269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.396 [2024-05-15 12:30:02.783283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.396 [2024-05-15 12:30:02.783481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.396 [2024-05-15 12:30:02.783660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.396 [2024-05-15 12:30:02.783672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.396 [2024-05-15 12:30:02.783685] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.396 [2024-05-15 12:30:02.786395] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.396 [2024-05-15 12:30:02.795151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.396 [2024-05-15 12:30:02.795804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.796253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.796269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.396 [2024-05-15 12:30:02.796283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.396 [2024-05-15 12:30:02.796470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.396 [2024-05-15 12:30:02.796648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.396 [2024-05-15 12:30:02.796660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.396 [2024-05-15 12:30:02.796673] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.396 [2024-05-15 12:30:02.799393] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.396 [2024-05-15 12:30:02.808153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.396 [2024-05-15 12:30:02.808832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.809169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.809184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.396 [2024-05-15 12:30:02.809203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.396 [2024-05-15 12:30:02.809388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.396 [2024-05-15 12:30:02.809568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.396 [2024-05-15 12:30:02.809580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.396 [2024-05-15 12:30:02.809593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.396 [2024-05-15 12:30:02.812304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.396 [2024-05-15 12:30:02.821065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.396 [2024-05-15 12:30:02.821682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.822122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.396 [2024-05-15 12:30:02.822138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.397 [2024-05-15 12:30:02.822152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.397 [2024-05-15 12:30:02.822343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.397 [2024-05-15 12:30:02.822522] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.397 [2024-05-15 12:30:02.822534] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.397 [2024-05-15 12:30:02.822547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.397 [2024-05-15 12:30:02.825264] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.397 [2024-05-15 12:30:02.834027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.397 [2024-05-15 12:30:02.834591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.835046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.835061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.397 [2024-05-15 12:30:02.835075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.397 [2024-05-15 12:30:02.835266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.397 [2024-05-15 12:30:02.835445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.397 [2024-05-15 12:30:02.835458] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.397 [2024-05-15 12:30:02.835471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.397 [2024-05-15 12:30:02.838179] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.397 [2024-05-15 12:30:02.846949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.397 [2024-05-15 12:30:02.847527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.847904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.847920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.397 [2024-05-15 12:30:02.847933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.397 [2024-05-15 12:30:02.848112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.397 [2024-05-15 12:30:02.848309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.397 [2024-05-15 12:30:02.848321] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.397 [2024-05-15 12:30:02.848334] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.397 [2024-05-15 12:30:02.851052] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.397 [2024-05-15 12:30:02.859969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.397 [2024-05-15 12:30:02.860610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.860937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.860952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.397 [2024-05-15 12:30:02.860966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.397 [2024-05-15 12:30:02.861152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.397 [2024-05-15 12:30:02.861336] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.397 [2024-05-15 12:30:02.861349] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.397 [2024-05-15 12:30:02.861363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.397 [2024-05-15 12:30:02.864077] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.397 [2024-05-15 12:30:02.873009] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.397 [2024-05-15 12:30:02.873536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.873908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.873923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.397 [2024-05-15 12:30:02.873937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.397 [2024-05-15 12:30:02.874122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.397 [2024-05-15 12:30:02.874305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.397 [2024-05-15 12:30:02.874318] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.397 [2024-05-15 12:30:02.874331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.397 [2024-05-15 12:30:02.877043] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.397 [2024-05-15 12:30:02.885967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.397 [2024-05-15 12:30:02.886498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.886867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.886882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.397 [2024-05-15 12:30:02.886896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.397 [2024-05-15 12:30:02.887081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.397 [2024-05-15 12:30:02.887266] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.397 [2024-05-15 12:30:02.887278] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.397 [2024-05-15 12:30:02.887291] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.397 [2024-05-15 12:30:02.889999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.397 [2024-05-15 12:30:02.898920] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.397 [2024-05-15 12:30:02.899576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.899958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.899974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.397 [2024-05-15 12:30:02.899988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.397 [2024-05-15 12:30:02.900172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.397 [2024-05-15 12:30:02.900355] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.397 [2024-05-15 12:30:02.900368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.397 [2024-05-15 12:30:02.900381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.397 [2024-05-15 12:30:02.903089] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.397 [2024-05-15 12:30:02.911854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.397 [2024-05-15 12:30:02.912463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.912837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.397 [2024-05-15 12:30:02.912853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.397 [2024-05-15 12:30:02.912867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.397 [2024-05-15 12:30:02.913051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.397 [2024-05-15 12:30:02.913235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.397 [2024-05-15 12:30:02.913247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.397 [2024-05-15 12:30:02.913260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.397 [2024-05-15 12:30:02.915971] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.664 [2024-05-15 12:30:02.924886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.664 [2024-05-15 12:30:02.925505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.664 [2024-05-15 12:30:02.925823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.664 [2024-05-15 12:30:02.925842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.664 [2024-05-15 12:30:02.925856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.664 [2024-05-15 12:30:02.926040] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.664 [2024-05-15 12:30:02.926224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.664 [2024-05-15 12:30:02.926237] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.664 [2024-05-15 12:30:02.926250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.664 [2024-05-15 12:30:02.928956] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.664 [2024-05-15 12:30:02.937878] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.664 [2024-05-15 12:30:02.938538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.664 [2024-05-15 12:30:02.938866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.664 [2024-05-15 12:30:02.938881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.664 [2024-05-15 12:30:02.938896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.664 [2024-05-15 12:30:02.939082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.664 [2024-05-15 12:30:02.939265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.664 [2024-05-15 12:30:02.939278] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.664 [2024-05-15 12:30:02.939291] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.664 [2024-05-15 12:30:02.942001] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.664 [2024-05-15 12:30:02.950936] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.665 [2024-05-15 12:30:02.951553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:02.951926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:02.951942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.665 [2024-05-15 12:30:02.951956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.665 [2024-05-15 12:30:02.952140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.665 [2024-05-15 12:30:02.952322] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.665 [2024-05-15 12:30:02.952335] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.665 [2024-05-15 12:30:02.952347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.665 [2024-05-15 12:30:02.955059] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.665 [2024-05-15 12:30:02.963984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.665 [2024-05-15 12:30:02.964612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:02.964996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:02.965011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.665 [2024-05-15 12:30:02.965028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.665 [2024-05-15 12:30:02.965219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.665 [2024-05-15 12:30:02.965399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.665 [2024-05-15 12:30:02.965412] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.665 [2024-05-15 12:30:02.965425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.665 [2024-05-15 12:30:02.968133] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.665 [2024-05-15 12:30:02.976897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.665 [2024-05-15 12:30:02.977504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:02.977951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:02.977967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.665 [2024-05-15 12:30:02.977981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.665 [2024-05-15 12:30:02.978165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.665 [2024-05-15 12:30:02.978348] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.665 [2024-05-15 12:30:02.978361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.665 [2024-05-15 12:30:02.978374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.665 [2024-05-15 12:30:02.981089] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.665 [2024-05-15 12:30:02.989849] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.665 [2024-05-15 12:30:02.990438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:02.990751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:02.990766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.665 [2024-05-15 12:30:02.990781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.665 [2024-05-15 12:30:02.990966] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.665 [2024-05-15 12:30:02.991146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.665 [2024-05-15 12:30:02.991158] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.665 [2024-05-15 12:30:02.991171] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.665 [2024-05-15 12:30:02.993884] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.665 [2024-05-15 12:30:03.002805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.665 [2024-05-15 12:30:03.003445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:03.003873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:03.003888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.665 [2024-05-15 12:30:03.003902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.665 [2024-05-15 12:30:03.004092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.665 [2024-05-15 12:30:03.004277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.665 [2024-05-15 12:30:03.004290] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.665 [2024-05-15 12:30:03.004302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.665 [2024-05-15 12:30:03.007012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.665 [2024-05-15 12:30:03.015772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.665 [2024-05-15 12:30:03.016360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:03.016699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:03.016715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.665 [2024-05-15 12:30:03.016729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.665 [2024-05-15 12:30:03.016916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.665 [2024-05-15 12:30:03.017094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.665 [2024-05-15 12:30:03.017107] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.665 [2024-05-15 12:30:03.017119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.665 [2024-05-15 12:30:03.019841] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.665 [2024-05-15 12:30:03.028763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.665 [2024-05-15 12:30:03.029363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:03.029742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.665 [2024-05-15 12:30:03.029756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.665 [2024-05-15 12:30:03.029771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.665 [2024-05-15 12:30:03.029950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.666 [2024-05-15 12:30:03.030123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.666 [2024-05-15 12:30:03.030134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.666 [2024-05-15 12:30:03.030147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.666 [2024-05-15 12:30:03.032876] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.666 [2024-05-15 12:30:03.041802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.666 [2024-05-15 12:30:03.042365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.042747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.042762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.666 [2024-05-15 12:30:03.042776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.666 [2024-05-15 12:30:03.042960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.666 [2024-05-15 12:30:03.043142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.666 [2024-05-15 12:30:03.043155] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.666 [2024-05-15 12:30:03.043168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.666 [2024-05-15 12:30:03.045885] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.666 [2024-05-15 12:30:03.054815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.666 [2024-05-15 12:30:03.055395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.055704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.055720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.666 [2024-05-15 12:30:03.055734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.666 [2024-05-15 12:30:03.055918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.666 [2024-05-15 12:30:03.056098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.666 [2024-05-15 12:30:03.056110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.666 [2024-05-15 12:30:03.056124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.666 [2024-05-15 12:30:03.058848] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.666 [2024-05-15 12:30:03.067765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.666 [2024-05-15 12:30:03.068348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.068728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.068744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.666 [2024-05-15 12:30:03.068758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.666 [2024-05-15 12:30:03.068943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.666 [2024-05-15 12:30:03.069122] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.666 [2024-05-15 12:30:03.069134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.666 [2024-05-15 12:30:03.069147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.666 [2024-05-15 12:30:03.071863] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.666 [2024-05-15 12:30:03.080792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.666 [2024-05-15 12:30:03.081446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.081826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.081841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.666 [2024-05-15 12:30:03.081855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.666 [2024-05-15 12:30:03.082039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.666 [2024-05-15 12:30:03.082224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.666 [2024-05-15 12:30:03.082240] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.666 [2024-05-15 12:30:03.082253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.666 [2024-05-15 12:30:03.084961] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.666 [2024-05-15 12:30:03.093726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.666 [2024-05-15 12:30:03.094234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.094648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.094664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.666 [2024-05-15 12:30:03.094679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.666 [2024-05-15 12:30:03.094863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.666 [2024-05-15 12:30:03.095041] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.666 [2024-05-15 12:30:03.095054] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.666 [2024-05-15 12:30:03.095067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.666 [2024-05-15 12:30:03.097782] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.666 [2024-05-15 12:30:03.106716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.666 [2024-05-15 12:30:03.107350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.107734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.107750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.666 [2024-05-15 12:30:03.107764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.666 [2024-05-15 12:30:03.107951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.666 [2024-05-15 12:30:03.108131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.666 [2024-05-15 12:30:03.108143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.666 [2024-05-15 12:30:03.108156] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.666 [2024-05-15 12:30:03.110866] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.666 [2024-05-15 12:30:03.119639] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.666 [2024-05-15 12:30:03.120148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.120551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.666 [2024-05-15 12:30:03.120567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.666 [2024-05-15 12:30:03.120581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.666 [2024-05-15 12:30:03.120767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.666 [2024-05-15 12:30:03.120946] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.667 [2024-05-15 12:30:03.120958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.667 [2024-05-15 12:30:03.120976] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.667 [2024-05-15 12:30:03.123694] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.667 [2024-05-15 12:30:03.132624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.667 [2024-05-15 12:30:03.133188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.667 [2024-05-15 12:30:03.133576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.667 [2024-05-15 12:30:03.133591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.667 [2024-05-15 12:30:03.133605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.667 [2024-05-15 12:30:03.133790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.667 [2024-05-15 12:30:03.133969] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.667 [2024-05-15 12:30:03.133981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.667 [2024-05-15 12:30:03.133995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.667 [2024-05-15 12:30:03.136715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.667 [2024-05-15 12:30:03.145643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.667 [2024-05-15 12:30:03.145981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.667 [2024-05-15 12:30:03.146382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.667 [2024-05-15 12:30:03.146400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.667 [2024-05-15 12:30:03.146414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.667 [2024-05-15 12:30:03.146601] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.667 [2024-05-15 12:30:03.146780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.667 [2024-05-15 12:30:03.146792] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.667 [2024-05-15 12:30:03.146805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.667 [2024-05-15 12:30:03.149525] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.667 [2024-05-15 12:30:03.158620] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.667 [2024-05-15 12:30:03.159187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.667 [2024-05-15 12:30:03.159534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.667 [2024-05-15 12:30:03.159550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.667 [2024-05-15 12:30:03.159564] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.667 [2024-05-15 12:30:03.159749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.667 [2024-05-15 12:30:03.159928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.667 [2024-05-15 12:30:03.159940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.667 [2024-05-15 12:30:03.159953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.667 [2024-05-15 12:30:03.162677] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.667 [2024-05-15 12:30:03.171612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.667 [2024-05-15 12:30:03.172219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.667 [2024-05-15 12:30:03.172595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.667 [2024-05-15 12:30:03.172610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.667 [2024-05-15 12:30:03.172624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.667 [2024-05-15 12:30:03.172811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.667 [2024-05-15 12:30:03.172989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.667 [2024-05-15 12:30:03.173001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.667 [2024-05-15 12:30:03.173015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.667 [2024-05-15 12:30:03.175733] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.667 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:34.667 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0 00:28:34.667 12:30:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:34.667 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:34.667 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.667 [2024-05-15 12:30:03.184660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.667 [2024-05-15 12:30:03.185180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.667 [2024-05-15 12:30:03.185517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.667 [2024-05-15 12:30:03.185532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.667 [2024-05-15 12:30:03.185547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.667 [2024-05-15 12:30:03.185734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.667 [2024-05-15 12:30:03.185914] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.667 [2024-05-15 12:30:03.185928] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.667 [2024-05-15 12:30:03.185942] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.667 [2024-05-15 12:30:03.188655] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.927 [2024-05-15 12:30:03.197582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.927 [2024-05-15 12:30:03.198081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.927 [2024-05-15 12:30:03.198457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.927 [2024-05-15 12:30:03.198474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.927 [2024-05-15 12:30:03.198488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.927 [2024-05-15 12:30:03.198674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.927 [2024-05-15 12:30:03.198854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.927 [2024-05-15 12:30:03.198872] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.927 [2024-05-15 12:30:03.198886] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.927 [2024-05-15 12:30:03.201605] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.927 [2024-05-15 12:30:03.210523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.927 [2024-05-15 12:30:03.211036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.927 [2024-05-15 12:30:03.211413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.927 [2024-05-15 12:30:03.211429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.927 [2024-05-15 12:30:03.211442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.927 [2024-05-15 12:30:03.211628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.927 [2024-05-15 12:30:03.211807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.927 [2024-05-15 12:30:03.211819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.927 [2024-05-15 12:30:03.211833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.927 [2024-05-15 12:30:03.214552] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.927 [2024-05-15 12:30:03.223484] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.927 [2024-05-15 12:30:03.223984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.927 [2024-05-15 12:30:03.224388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.927 [2024-05-15 12:30:03.224404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.927 [2024-05-15 12:30:03.224418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.927 [2024-05-15 12:30:03.224604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.927 [2024-05-15 12:30:03.224783] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.927 [2024-05-15 12:30:03.224796] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.927 [2024-05-15 12:30:03.224809] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.927 12:30:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.927 12:30:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:34.927 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.927 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.927 [2024-05-15 12:30:03.227530] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.927 [2024-05-15 12:30:03.230253] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.927 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.927 12:30:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:34.927 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.927 [2024-05-15 12:30:03.236471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.927 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.927 [2024-05-15 12:30:03.237049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.927 [2024-05-15 12:30:03.237376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.927 [2024-05-15 12:30:03.237392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.927 [2024-05-15 12:30:03.237407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.927 [2024-05-15 12:30:03.237593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.927 [2024-05-15 12:30:03.237771] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.927 [2024-05-15 12:30:03.237783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.927 [2024-05-15 12:30:03.237796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.927 [2024-05-15 12:30:03.240507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.927 [2024-05-15 12:30:03.249441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.927 [2024-05-15 12:30:03.250023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.927 [2024-05-15 12:30:03.250338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.927 [2024-05-15 12:30:03.250354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.927 [2024-05-15 12:30:03.250369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.927 [2024-05-15 12:30:03.250553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.927 [2024-05-15 12:30:03.250733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.927 [2024-05-15 12:30:03.250745] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.927 [2024-05-15 12:30:03.250758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.927 [2024-05-15 12:30:03.253486] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.927 [2024-05-15 12:30:03.262418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.927 [2024-05-15 12:30:03.262827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.927 [2024-05-15 12:30:03.263262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.927 [2024-05-15 12:30:03.263279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.927 [2024-05-15 12:30:03.263293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.927 [2024-05-15 12:30:03.263482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.927 [2024-05-15 12:30:03.263661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.927 [2024-05-15 12:30:03.263673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.927 [2024-05-15 12:30:03.263687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.927 [2024-05-15 12:30:03.266408] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.928 Malloc0 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.928 [2024-05-15 12:30:03.275334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.928 [2024-05-15 12:30:03.275889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.928 [2024-05-15 12:30:03.276285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.928 [2024-05-15 12:30:03.276302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.928 [2024-05-15 12:30:03.276316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.928 [2024-05-15 12:30:03.276503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.928 [2024-05-15 12:30:03.276681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.928 [2024-05-15 12:30:03.276693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.928 [2024-05-15 12:30:03.276706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.928 [2024-05-15 12:30:03.279424] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.928 [2024-05-15 12:30:03.288344] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:34.928 [2024-05-15 12:30:03.288956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.928 [2024-05-15 12:30:03.289361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.928 [2024-05-15 12:30:03.289376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c579f0 with addr=10.0.0.2, port=4420 00:28:34.928 [2024-05-15 12:30:03.289390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c579f0 is same with the state(5) to be set 00:28:34.928 [2024-05-15 12:30:03.289577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c579f0 (9): Bad file descriptor 00:28:34.928 [2024-05-15 12:30:03.289756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:34.928 [2024-05-15 12:30:03.289768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:34.928 [2024-05-15 12:30:03.289781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.928 [2024-05-15 12:30:03.292495] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.928 [2024-05-15 12:30:03.294958] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:34.928 [2024-05-15 12:30:03.295205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.928 12:30:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2292746 00:28:34.928 [2024-05-15 12:30:03.301269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:35.186 [2024-05-15 12:30:03.456442] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
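The rpc_cmd calls traced above stand up the target that finally lets the reset succeed: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on 10.0.0.2:4420. A hedged sketch of the same sequence as direct rpc.py calls (checkout path taken from this job's workspace; rpc_cmd may pass defaults not shown):
  # Illustrative reconstruction only; rpc.py talks to the running nvmf_tgt over its default RPC socket.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192                                    # flags copied verbatim from the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420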
00:28:43.300
00:28:43.300 Latency(us)
00:28:43.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.300 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:43.300 Verification LBA range: start 0x0 length 0x4000
00:28:43.300 Nvme1n1 : 15.01 8581.07 33.52 12547.13 0.00 6037.76 1035.47 25270.68
00:28:43.300 ===================================================================================================================
00:28:43.300 Total : 8581.07 33.52 12547.13 0.00 6037.76 1035.47 25270.68
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:43.558 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:43.558 rmmod nvme_tcp
00:28:43.558 rmmod nvme_fabrics
00:28:43.558 rmmod nvme_keyring
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2293933 ']'
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2293933
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # '[' -z 2293933 ']'
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # kill -0 2293933
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # uname
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2293933
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2293933'
00:28:43.817 killing process with pid 2293933
00:28:43.817 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # kill 2293933
00:28:43.817 [2024-05-15 12:30:12.153483] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
12:30:12 nvmf_tcp.nvmf_bdevperf --
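Teardown in the trace above is the mirror image of the setup: the subsystem is deleted over RPC, the kernel initiator modules are unloaded, and the nvmf target process (pid 2293933 in this run) is killed. A hedged sketch of the same steps, with a hypothetical pid variable:
  # Illustrative only; $RPC as defined in the earlier sketch, $tgt_pid is a hypothetical placeholder.
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sudo modprobe -v -r nvme-tcp       # the trace shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
  sudo modprobe -v -r nvme-fabrics
  kill "$tgt_pid"                    # the framework's killprocess did this for pid 2293933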
common/autotest_common.sh@971 -- # wait 2293933 00:28:44.076 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:44.076 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:44.076 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:44.076 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:44.076 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:44.076 12:30:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.076 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.076 12:30:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.980 12:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:45.980 00:28:45.980 real 0m27.489s 00:28:45.981 user 1m2.902s 00:28:45.981 sys 0m8.015s 00:28:45.981 12:30:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:45.981 12:30:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:45.981 ************************************ 00:28:45.981 END TEST nvmf_bdevperf 00:28:45.981 ************************************ 00:28:45.981 12:30:14 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:45.981 12:30:14 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:28:45.981 12:30:14 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:45.981 12:30:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:46.239 ************************************ 00:28:46.239 START TEST nvmf_target_disconnect 00:28:46.239 ************************************ 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:46.239 * Looking for test storage... 
00:28:46.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.239 12:30:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:28:46.240 12:30:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:52.825 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:52.825 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:28:52.825 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:52.825 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:52.825 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:52.825 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:52.825 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:52.825 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:52.826 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:52.826 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.826 12:30:20 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:52.826 Found net devices under 0000:af:00.0: cvl_0_0 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:52.826 Found net devices under 0000:af:00.1: cvl_0_1 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:52.826 12:30:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:52.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:52.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:28:52.826 00:28:52.826 --- 10.0.0.2 ping statistics --- 00:28:52.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.826 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:52.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:28:52.826 00:28:52.826 --- 10.0.0.1 ping statistics --- 00:28:52.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.826 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:52.826 ************************************ 00:28:52.826 START TEST nvmf_target_disconnect_tc1 00:28:52.826 ************************************ 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc1 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:28:52.826 
12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:52.826 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:52.827 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.827 [2024-05-15 12:30:21.285609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.827 [2024-05-15 12:30:21.286242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.827 [2024-05-15 12:30:21.286322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc34b0 with addr=10.0.0.2, port=4420 00:28:52.827 [2024-05-15 12:30:21.286383] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:52.827 [2024-05-15 12:30:21.286404] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:52.827 [2024-05-15 12:30:21.286417] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:52.827 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:52.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:52.827 Initializing NVMe Controllers 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 
-- # (( !es == 0 )) 00:28:52.827 00:28:52.827 real 0m0.119s 00:28:52.827 user 0m0.045s 00:28:52.827 sys 0m0.074s 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:52.827 ************************************ 00:28:52.827 END TEST nvmf_target_disconnect_tc1 00:28:52.827 ************************************ 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:52.827 12:30:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:53.085 ************************************ 00:28:53.085 START TEST nvmf_target_disconnect_tc2 00:28:53.085 ************************************ 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc2 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2299682 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2299682 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2299682 ']' 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
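In tc1 above, the reconnect example is launched before any target is listening on 10.0.0.2:4420, so spdk_nvme_probe() fails with errno 111 and the NOT wrapper treats that failure as the expected result. tc2 then brings up a real target: nvmfappstart runs nvmf_tgt inside the cvl_0_0_ns_spdk namespace created earlier and waits for its RPC socket. A condensed sketch of that startup, using only values visible in the trace (the 5-second polling loop is an assumption standing in for the harness's waitforlisten, not its actual implementation):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the target inside the network namespace, as the trace does.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # Crude stand-in for waitforlisten: poll for the default RPC socket.
  for _ in $(seq 1 50); do
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 0.1
  done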
00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:53.085 12:30:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.085 [2024-05-15 12:30:21.440277] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:28:53.085 [2024-05-15 12:30:21.440325] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.085 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.085 [2024-05-15 12:30:21.531025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:53.085 [2024-05-15 12:30:21.605233] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.085 [2024-05-15 12:30:21.605272] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.085 [2024-05-15 12:30:21.605281] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.085 [2024-05-15 12:30:21.605290] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.085 [2024-05-15 12:30:21.605297] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.086 [2024-05-15 12:30:21.605363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:53.086 [2024-05-15 12:30:21.605471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:53.086 [2024-05-15 12:30:21.605582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:53.086 [2024-05-15 12:30:21.605583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.019 Malloc0 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.019 [2024-05-15 12:30:22.328230] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.019 [2024-05-15 12:30:22.356256] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:54.019 [2024-05-15 12:30:22.356516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2299913 00:28:54.019 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:54.020 12:30:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:54.020 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.920 12:30:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2299682 00:28:55.920 12:30:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Write completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Write completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Write completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Write completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Write completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Write completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Write completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Write completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error 
(sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 [2024-05-15 12:30:24.394226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.920 starting I/O failed 00:28:55.920 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 [2024-05-15 12:30:24.394465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 
00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 [2024-05-15 12:30:24.394684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 
starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Write completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 Read completed with error (sct=0, sc=8) 00:28:55.921 starting I/O failed 00:28:55.921 [2024-05-15 12:30:24.394909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:55.921 [2024-05-15 12:30:24.395425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.921 [2024-05-15 12:30:24.395926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.921 [2024-05-15 12:30:24.395983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:55.921 qpair failed and we were unable to recover it. 00:28:55.921 [2024-05-15 12:30:24.396498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.921 [2024-05-15 12:30:24.396879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.921 [2024-05-15 12:30:24.396922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.921 qpair failed and we were unable to recover it. 00:28:55.921 [2024-05-15 12:30:24.397346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.921 [2024-05-15 12:30:24.397842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.921 [2024-05-15 12:30:24.397881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.921 qpair failed and we were unable to recover it. 00:28:55.921 [2024-05-15 12:30:24.398386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.921 [2024-05-15 12:30:24.398738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.398754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 
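Once kill -9 removes the target process, every outstanding command completes with an error (the CQ transport error -6 entries above) and each new qpair connect attempt fails with errno 111, which is ECONNREFUSED; that is what the long run of "qpair failed and we were unable to recover it" lines records. A quick manual check from the initiator side that nothing is listening anymore, using bash's /dev/tcp redirection (a sketch for illustration only, not part of the test scripts):

  # errno 111 is ECONNREFUSED: nothing listens on 10.0.0.2:4420 after kill -9.
  if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "port 4420 is still accepting connections"
  else
    echo "connect() refused or timed out - target is down, matching the errors above"
  fi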
00:28:55.922 [2024-05-15 12:30:24.399044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.399515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.399555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.399984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.400404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.400443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.400879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.401368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.401408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.401814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.402219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.402258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.402766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.403281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.403321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.403737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.404160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.404223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.404669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.405127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.405143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 
00:28:55.922 [2024-05-15 12:30:24.405496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.405926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.405965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.406468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.406941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.406978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.407395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.407774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.407813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.408320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.408789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.408828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.409304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.409798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.409837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.410314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.410756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.410795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.411288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.411758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.411797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 
00:28:55.922 [2024-05-15 12:30:24.412256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.412509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.412548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.412917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.413122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.413138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.413521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.414008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.414048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.414429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.414777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.414815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.415318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.415787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.415827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.416306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.416738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.416777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.417254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.417748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.417788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 
00:28:55.922 [2024-05-15 12:30:24.418230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.418701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.418739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.419237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.419742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.419780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.420266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.420618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.420658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.421157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.421662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.421701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.422211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.422695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.422733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.423155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.423579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.423621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 00:28:55.922 [2024-05-15 12:30:24.424050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.424408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.424448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.922 qpair failed and we were unable to recover it. 
00:28:55.922 [2024-05-15 12:30:24.424936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.922 [2024-05-15 12:30:24.425408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.425447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.425971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.426454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.426493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.426868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.427273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.427312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.427786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.428277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.428317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.428762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.429223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.429239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.429729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.430229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.430268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.430767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.431012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.431050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 
00:28:55.923 [2024-05-15 12:30:24.431425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.431839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.431855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.432219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.432513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.432529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.432911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.433380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.433420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.433858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.434351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.434391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.434746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.435215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.435254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.435679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.436149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.436165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.436543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.436983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.436998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 
00:28:55.923 [2024-05-15 12:30:24.437456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.437940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.437978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.438390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.438812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.438851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.439295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.439650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.439689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.440115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.440629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.440668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.441146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.441585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.441626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.442149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.442667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.442706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.443152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.443608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.443667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 
00:28:55.923 [2024-05-15 12:30:24.444114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.444487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.444503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.444873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.445328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.445368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.445816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.446252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.446271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.446599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.447082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.923 [2024-05-15 12:30:24.447098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:55.923 qpair failed and we were unable to recover it. 00:28:55.923 [2024-05-15 12:30:24.447469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.447890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.447910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 00:28:56.192 [2024-05-15 12:30:24.448235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.448586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.448603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 00:28:56.192 [2024-05-15 12:30:24.448961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.449414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.449455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 
00:28:56.192 [2024-05-15 12:30:24.449879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.450370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.450411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 00:28:56.192 [2024-05-15 12:30:24.450790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.451259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.451299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 00:28:56.192 [2024-05-15 12:30:24.451796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.452212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.452252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 00:28:56.192 [2024-05-15 12:30:24.452673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.453104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.453143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 00:28:56.192 [2024-05-15 12:30:24.453579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.454048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.454087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 00:28:56.192 [2024-05-15 12:30:24.454456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.454935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.454987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 00:28:56.192 [2024-05-15 12:30:24.455482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.455847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.455863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 
00:28:56.192 [2024-05-15 12:30:24.456295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.456784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.456823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 00:28:56.192 [2024-05-15 12:30:24.457241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.457663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.457702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 00:28:56.192 [2024-05-15 12:30:24.458134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.458551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.458592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 00:28:56.192 [2024-05-15 12:30:24.459018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.459510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.459551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.192 qpair failed and we were unable to recover it. 00:28:56.192 [2024-05-15 12:30:24.459990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.192 [2024-05-15 12:30:24.460458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.460497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.460999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.461360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.461399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.461823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.462306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.462322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 
00:28:56.193 [2024-05-15 12:30:24.462712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.463206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.463246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.463751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.464225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.464241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.464693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.465181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.465228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.465640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.466105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.466144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.466572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.467078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.467094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.467568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.467988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.468027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.468523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.468792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.468807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 
00:28:56.193 [2024-05-15 12:30:24.469239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.469729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.469767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.470187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.470700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.470739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.471254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.471724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.471763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.472187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.472461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.472500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.472976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.473441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.473482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.473959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.474427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.474466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.474897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.475310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.475350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 
00:28:56.193 [2024-05-15 12:30:24.475770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.476257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.476297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.476738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.477230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.477275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.477649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.478073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.478110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.478616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.479112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.479151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.479586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.479992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.480008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.480467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.480864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.480904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.481323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.481722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.481761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 
00:28:56.193 [2024-05-15 12:30:24.482182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.482658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.482697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.483173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.483708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.483747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.484215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.484684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.484722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.485222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.485620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.485657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.486087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.486557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.486603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.193 [2024-05-15 12:30:24.487040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.487476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.193 [2024-05-15 12:30:24.487516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.193 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.488024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.488490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.488529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 
00:28:56.194 [2024-05-15 12:30:24.488977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.489335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.489375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.489818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.490299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.490315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.490646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.491004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.491042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.491494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.491740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.491779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.492273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.492762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.492801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.493226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.493735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.493775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.494258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.494752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.494795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 
00:28:56.194 [2024-05-15 12:30:24.495151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.495640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.495685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.496162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.496565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.496605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.496868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.497316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.497356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.497833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.498299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.498338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.498771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.499211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.499250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.499783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.500126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.500164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.500671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.500916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.500955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 
00:28:56.194 [2024-05-15 12:30:24.501434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.501904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.501943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.502357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.502847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.502886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.503309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.503758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.503797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.504274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.504753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.504797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.505213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.505707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.505757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.506237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.506655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.506694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.507102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.507460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.507500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 
00:28:56.194 [2024-05-15 12:30:24.508004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.508496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.508535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.509029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.509497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.509536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.509948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.510344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.510383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.510764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.511251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.511290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.511727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.512140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.512179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.512693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.513183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.513236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 00:28:56.194 [2024-05-15 12:30:24.513659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.514130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.514169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.194 qpair failed and we were unable to recover it. 
00:28:56.194 [2024-05-15 12:30:24.514540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.194 [2024-05-15 12:30:24.515017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.515054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.515527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.515944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.515983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.516414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.516878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.516916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.517391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.517881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.517919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.518327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.518826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.518871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.519264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.519441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.519480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.519986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.520475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.520515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 
00:28:56.195 [2024-05-15 12:30:24.520935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.521422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.521462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.521869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.522333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.522373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.522866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.523378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.523418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.523625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.524040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.524079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.524578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.524995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.525034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.525448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.525929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.525968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.526464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.526957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.526996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 
00:28:56.195 [2024-05-15 12:30:24.527494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.527937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.527976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.528467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.528919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.528958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.529399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.529889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.529927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.530412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.530833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.530872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.531286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.531759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.531798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.532272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.532694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.532732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.533151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.533669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.533709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 
00:28:56.195 [2024-05-15 12:30:24.534211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.534703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.534742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.535231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.535606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.535645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.536119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.536558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.536599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.537005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.537477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.537516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.537939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.538426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.538466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.538944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.539338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.539354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.539776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.540217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.540233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 
00:28:56.195 [2024-05-15 12:30:24.540691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.541095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.541133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.195 qpair failed and we were unable to recover it. 00:28:56.195 [2024-05-15 12:30:24.541592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.541854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.195 [2024-05-15 12:30:24.541893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.542397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.542833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.542871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.543334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.543848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.543887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.544386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.544737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.544775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.545265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.545713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.545751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.546251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.546674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.546713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 
00:28:56.196 [2024-05-15 12:30:24.546983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.547393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.547409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.547786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.548230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.548269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.548720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.549118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.549156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.549521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.549916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.549953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.550430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.550829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.550868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.551120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.551503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.551541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.552007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.552486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.552525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 
00:28:56.196 [2024-05-15 12:30:24.552949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.553412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.553451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.553930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.554397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.554436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.554864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.555287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.555336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.555742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.556214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.556254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.556756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.557171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.557217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.557663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.558153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.558197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.558577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.559045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.559084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 
00:28:56.196 [2024-05-15 12:30:24.559506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.559973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.560011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.560507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.560967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.561006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.561530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.562046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.562086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.562507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.562995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.563034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.563463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.563814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.563853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.196 qpair failed and we were unable to recover it. 00:28:56.196 [2024-05-15 12:30:24.564354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.564713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.196 [2024-05-15 12:30:24.564752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.565262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.565753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.565792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 
00:28:56.197 [2024-05-15 12:30:24.566228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.566720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.566758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.567253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.567622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.567661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.568185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.568716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.568755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.569300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.569766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.569804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.570314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.570760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.570798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.571216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.571704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.571743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.572260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.572628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.572667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 
00:28:56.197 [2024-05-15 12:30:24.572940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.573429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.573469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.573973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.574458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.574497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.574904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.575397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.575436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.575912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.576338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.576378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.576785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.577271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.577311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.577735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.578165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.578211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.578711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.579125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.579164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 
00:28:56.197 [2024-05-15 12:30:24.579680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.580158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.580206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.580700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.581146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.581185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.581466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.581795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.581833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.582331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.582832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.582871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.583300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.583791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.583829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.584240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.584712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.584750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.585249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.585666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.585705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 
00:28:56.197 [2024-05-15 12:30:24.586200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.586649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.586689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.587201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.587552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.587591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.588110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.588567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.588606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.589018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.589442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.589482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.589919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.590385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.590424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.197 qpair failed and we were unable to recover it. 00:28:56.197 [2024-05-15 12:30:24.590677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.197 [2024-05-15 12:30:24.591018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.591056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.591429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.591919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.591958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 
00:28:56.198 [2024-05-15 12:30:24.592363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.592858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.592897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.593404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.593817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.593856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.594115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.594515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.594531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.594927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.595371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.595411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.595847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.596211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.596251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.596729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.597211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.597251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.597727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.598140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.598179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 
00:28:56.198 [2024-05-15 12:30:24.598594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.598997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.599035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.599523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.599882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.599921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.600345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.600838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.600877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.601251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.601650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.601689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.602123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.602525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.602541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.602856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.603211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.603250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.603684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.604177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.604240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 
00:28:56.198 [2024-05-15 12:30:24.604681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.605008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.605046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.605521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.605938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.605976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.606473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.606735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.606774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.607218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.607638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.607677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.608152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.608568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.608584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.609037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.609527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.609567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.609819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.610174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.610221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 
00:28:56.198 [2024-05-15 12:30:24.610664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.611155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.611200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.611613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.612018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.612065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.612392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.612863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.612902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.613379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.613805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.613844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.614341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.614615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.614654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.615152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.615690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.615730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 00:28:56.198 [2024-05-15 12:30:24.616210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.616699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.198 [2024-05-15 12:30:24.616738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.198 qpair failed and we were unable to recover it. 
00:28:56.198 [2024-05-15 12:30:24.617109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.617575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.617615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.618113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.618579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.618619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.619093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.619524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.619564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.620062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.620483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.620523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.621022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.621498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.621538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.622014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.622429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.622469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.622914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.623329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.623368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 
00:28:56.199 [2024-05-15 12:30:24.623786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.624286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.624302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.624753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.625250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.625296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.625785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.626301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.626341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.626824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.627240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.627279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.627756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.628247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.628286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.628707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.629109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.629148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5b98000b90 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.629742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.630275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.630299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 
00:28:56.199 [2024-05-15 12:30:24.630719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.631156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.631216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.631604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.631984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.632030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.632512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.632978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.633024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.633542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.634030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.634068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.634437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.634880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.634935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.635365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.635867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.635918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.636439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.636860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.636906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 
00:28:56.199 [2024-05-15 12:30:24.637364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.637675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.637696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.638141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.638606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.638652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.639185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.639584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.639603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.640069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.640576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.640622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.641099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.641468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.641515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.641966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.642468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.642487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.642875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.643320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.643367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 
00:28:56.199 [2024-05-15 12:30:24.643877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.644409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.644428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.199 [2024-05-15 12:30:24.644876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.645253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.199 [2024-05-15 12:30:24.645272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.199 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.645684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.646044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.646063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.646447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.646900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.646920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.647401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.647856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.647876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.648257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.648688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.648706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.649170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.649560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.649579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 
00:28:56.200 [2024-05-15 12:30:24.650042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.650252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.650271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.650661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.651120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.651138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.651522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.651848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.651866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.652352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.652732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.652749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.653204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.653646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.653661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.654045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.654410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.654426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.654777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.655135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.655151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 
00:28:56.200 [2024-05-15 12:30:24.655591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.655932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.655947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.656405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.656789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.656803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.657149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.657594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.657609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.658033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.658393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.658408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.658830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.659170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.659186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.659503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.659918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.659933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.660277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.660665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.660679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 
00:28:56.200 [2024-05-15 12:30:24.661059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.661492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.661506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.661868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.662331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.662346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.662714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.663073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.663087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.663507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.663927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.663941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.664288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.664728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.664742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.665107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.665463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.665477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.665846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.666233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.666247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 
00:28:56.200 [2024-05-15 12:30:24.666664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.667031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.667045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.667390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.667826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.667840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.668205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.668644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.668658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.200 qpair failed and we were unable to recover it. 00:28:56.200 [2024-05-15 12:30:24.669043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.200 [2024-05-15 12:30:24.669318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.669333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.669773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.670229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.670243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.670610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.670999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.671013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.671357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.671708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.671722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 
00:28:56.201 [2024-05-15 12:30:24.672083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.672523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.672537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.672929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.673369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.673384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.673803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.674143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.674157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.674576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.674922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.674936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.675373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.675788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.675802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.676015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.676381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.676395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.676702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.677083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.677097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 
00:28:56.201 [2024-05-15 12:30:24.677536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.677919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.677934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.678353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.678703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.678717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.679102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.679518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.679532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.679979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.680331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.680345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.680762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.681123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.681138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.681436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.681646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.681660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.682008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.682445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.682459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 
00:28:56.201 [2024-05-15 12:30:24.682806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.683245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.683260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.683699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.684136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.684151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.684598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.685015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.685029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.685401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.685842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.685857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.686206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.686646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.686660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.687102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.687542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.687558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 00:28:56.201 [2024-05-15 12:30:24.687920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.688290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.201 [2024-05-15 12:30:24.688304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.201 qpair failed and we were unable to recover it. 
00:28:56.201 [2024-05-15 12:30:24.688741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.689156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.689170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.689552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.689970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.689985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.690276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.690732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.690747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.691111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.691461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.691475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.691915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.692328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.692343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.692761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.693180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.693200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.693570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.693924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.693938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 
00:28:56.202 [2024-05-15 12:30:24.694303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.694737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.694784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.695162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.697524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.697550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.698003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.698451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.698500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.698939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.699340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.699388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.699886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.700373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.700385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.700798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.701286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.701325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.701841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.702242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.702282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 
00:28:56.202 [2024-05-15 12:30:24.702696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.703111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.703149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.703516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.703967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.703978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.704344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.704703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.704741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.705242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.705695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.705736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.706217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.706659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.706697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.707173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.707599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.707638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.707948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.708259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.708271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 
00:28:56.202 [2024-05-15 12:30:24.708705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.709172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.709246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.709684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.710048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.710059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.710405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.710765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.710776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.202 [2024-05-15 12:30:24.711150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.711565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.202 [2024-05-15 12:30:24.711605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.202 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.712012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.712221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.712233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.712647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.713058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.713069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.713367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.713717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.713756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 
00:28:56.469 [2024-05-15 12:30:24.714155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.714518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.714530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.714823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.715223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.715263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.715602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.716021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.716060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.716536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.716951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.716989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.717493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.717838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.717876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.718300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.718733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.718771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.719248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.719646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.719684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 
00:28:56.469 [2024-05-15 12:30:24.720062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.720553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.720598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.721120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.721582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.721622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.722051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.722479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.722519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.723003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.723517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.723556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.724033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.724461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.724500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.724978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.725341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.725381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.725783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.726084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.726122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 
00:28:56.469 [2024-05-15 12:30:24.726537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.727004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.727042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.727517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.727931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.727969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.728394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.728882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.728920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-05-15 12:30:24.729352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.729759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-05-15 12:30:24.729804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.730282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.730771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.730810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.731218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.731638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.731677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.732174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.732563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.732602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 
00:28:56.470 [2024-05-15 12:30:24.733023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.733439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.733478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.733827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.734242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.734254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.734593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.734940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.734952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.735311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.735668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.735679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.735970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.736325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.736337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.736790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.737257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.737295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.737639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.738035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.738079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 
00:28:56.470 [2024-05-15 12:30:24.738503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.738918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.738956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.739303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.739725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.739763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.740261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.740658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.740696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.741141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.741621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.741660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.742083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.742496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.742535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.742956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.743303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.743342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.743820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.744176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.744225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 
00:28:56.470 [2024-05-15 12:30:24.744640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.745105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.745143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.745418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.745884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.745922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.746447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.746850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.746888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.747314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.747739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.747778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.748202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.748614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.748653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.749152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.749579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.749624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.750070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.750335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.750375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 
00:28:56.470 [2024-05-15 12:30:24.750873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.751361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.751373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.751792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.752257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.752296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.752775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.753193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.753205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.753570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.754002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.754014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-05-15 12:30:24.754384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-05-15 12:30:24.754775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.754813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.755239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.755653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.755692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.756197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.756420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.756460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 
00:28:56.471 [2024-05-15 12:30:24.756866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.757279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.757318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.757813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.758224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.758264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.758763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.759181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.759232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.759635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.760066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.760104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.760583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.761047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.761058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.761352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.761770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.761808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.762239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.762711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.762749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 
00:28:56.471 [2024-05-15 12:30:24.763262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.763616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.763654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.764029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.764379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.764419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.764922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.765411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.765450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.765864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.766315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.766327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.766701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.767117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.767156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.767596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.768030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.768041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.768488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.768977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.769015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 
00:28:56.471 [2024-05-15 12:30:24.769448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.769916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.769955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.770293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.770628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.770639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.771085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.771576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.771616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.772039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.772523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.772562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.773048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.773518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.773556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.773990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.774403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.774443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.774858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.775294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.775334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 
00:28:56.471 [2024-05-15 12:30:24.775748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.776214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.776254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.776674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.777142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.777181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.777670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.777913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.777951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.778360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.778851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.778889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.779232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.779645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.779683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.471 qpair failed and we were unable to recover it. 00:28:56.471 [2024-05-15 12:30:24.780213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.780699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.471 [2024-05-15 12:30:24.780737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.781260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.781674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.781713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 
00:28:56.472 [2024-05-15 12:30:24.782136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.782647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.782687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.783115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.783603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.783649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.784026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.784420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.784459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.784958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.785376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.785418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.785610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.786039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.786077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.786545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.786977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.787016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.787490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.787885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.787923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 
00:28:56.472 [2024-05-15 12:30:24.788280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.788672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.788710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.789232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.789651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.789689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.790132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.790619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.790658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.791132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.791509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.791548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.791977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.792464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.792503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.793022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.793494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.793534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.794033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.794500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.794539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 
00:28:56.472 [2024-05-15 12:30:24.795015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.795496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.795507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.795713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.796082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.796120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.796561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.797049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.797087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.797585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.798054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.798093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.798497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.798983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.799021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.799517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.799983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.800022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.800446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.800929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.800967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 
00:28:56.472 [2024-05-15 12:30:24.801471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.801963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.802002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.802505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.802990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.803028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.803390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.803879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.803917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.804348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.804786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.804825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.805060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.805502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.805542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.806017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.806484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.806524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 00:28:56.472 [2024-05-15 12:30:24.807024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.807516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.807555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.472 qpair failed and we were unable to recover it. 
00:28:56.472 [2024-05-15 12:30:24.807978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.472 [2024-05-15 12:30:24.808485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.808524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.808959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.809424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.809463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.809889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.810315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.810354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.810835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.811321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.811359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.811796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.812271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.812309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.812807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.813321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.813361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.813841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.814325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.814365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 
00:28:56.473 [2024-05-15 12:30:24.814803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.815302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.815341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.815841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.816332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.816370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.816794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.817301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.817340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.817837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.818330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.818369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.818862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.819209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.819249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.819450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.819863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.819903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.820299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.820667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.820706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 
00:28:56.473 [2024-05-15 12:30:24.821127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.821657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.821697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.822173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.822612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.822650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.823149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.823571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.823609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.824028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.824439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.824479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.824956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.825365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.825405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.825756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.826159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.826205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.826712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.827208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.827247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 
00:28:56.473 [2024-05-15 12:30:24.827734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.828142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.828180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.473 qpair failed and we were unable to recover it. 00:28:56.473 [2024-05-15 12:30:24.828685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.829204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.473 [2024-05-15 12:30:24.829243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.829747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.830229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.830269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.830770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.831250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.831290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.831714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.832155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.832200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.832454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.832946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.832985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.833403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.833753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.833791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 
00:28:56.474 [2024-05-15 12:30:24.834266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.834729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.834767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.835036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.835478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.835516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.836012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.836502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.836541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.836991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.837480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.837520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.838017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.838416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.838457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.838955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.839375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.839415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.839908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.840414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.840465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 
00:28:56.474 [2024-05-15 12:30:24.840886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.841375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.841415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.841891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.842402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.842441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.842943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.843460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.843499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.844020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.844514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.844553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.844971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.845439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.845496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.845998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.846408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.846452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.846957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.847454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.847503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 
00:28:56.474 [2024-05-15 12:30:24.847972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.848457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.848496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.848926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.849272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.849283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.849706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.850114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.850152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.850588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.851061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.851098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.851577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.851822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.851860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.852361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.852826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.852865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.853356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.853798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.853836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 
00:28:56.474 [2024-05-15 12:30:24.854333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.854762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.854800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.855274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.855688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.855736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.856170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.856591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-05-15 12:30:24.856630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-05-15 12:30:24.857056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.857535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.857575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.858098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.858581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.858627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.859122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.859548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.859588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.859963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.860431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.860471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 
00:28:56.475 [2024-05-15 12:30:24.860904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.861317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.861356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.861852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.862324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.862363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.862766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.863201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.863240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.863692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.864117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.864156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.864670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.865161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.865209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.865698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.866109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.866147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.866657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.867049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.867086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 
00:28:56.475 [2024-05-15 12:30:24.867512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.867978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.868022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.868492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.868702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.868740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.868990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.869332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.869371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.869874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.870342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.870381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.870826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.871231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.871270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.871700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.872121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.872159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.872590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.872985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.873023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 
00:28:56.475 [2024-05-15 12:30:24.873385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.873822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.873860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.874121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.874607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.874658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.875044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.875435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.875475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.875964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.876400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.876445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.876920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.877328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.877340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.877707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.878095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.878133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.878507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.878976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.879013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 
00:28:56.475 [2024-05-15 12:30:24.879509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.879967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.880006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.880527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.880999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.881038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.881465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.881955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.881994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.882491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.882894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.882934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-05-15 12:30:24.883408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.883831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-05-15 12:30:24.883869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.884297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.884720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.884758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.884946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.885347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.885392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 
00:28:56.476 [2024-05-15 12:30:24.885911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.886282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.886321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.886756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.887164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.887210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.887686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.888095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.888134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.888505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.888997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.889036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.889491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.889667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.889706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.890133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.890483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.890522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.890934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.891422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.891462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 
00:28:56.476 [2024-05-15 12:30:24.891836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.892323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.892362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.892717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.893160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.893206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.893710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.894070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.894108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.894503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.894974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.895025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.895484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.895898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.895911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.896365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.896731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.896745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.897178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.897545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.897559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 
00:28:56.476 [2024-05-15 12:30:24.897999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.898415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.898431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.898786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.899153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.899166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.899600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.900039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.900055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.900474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.900845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.900860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.901232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.901515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.901529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.901945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.902254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.902268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.902711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.903065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.903079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 
00:28:56.476 [2024-05-15 12:30:24.903382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.903519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.903533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.903977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.904343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.904358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.904792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.905209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.905226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.905642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.906071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.906085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.906446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.906824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.906838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.907109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.907529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-05-15 12:30:24.907544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-05-15 12:30:24.907807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.908208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.908222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 
00:28:56.477 [2024-05-15 12:30:24.908589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.908975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.908989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.909361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.909787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.909801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.910237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.910676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.910691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.910836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.911279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.911295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.911656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.911804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.911817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.912126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.912568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.912582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.912883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.913249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.913263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 
00:28:56.477 [2024-05-15 12:30:24.913659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.914076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.914090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.914258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.914464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.914479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.914915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.915352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.915366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.915731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.916083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.916097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.916397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.916757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.916771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.917136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.917448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.917462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.917916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.918327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.918342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 
00:28:56.477 [2024-05-15 12:30:24.918759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.919199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.919213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.919436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.919749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.919763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.920200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.920580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.920593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.921031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.921465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.921481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.921864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.922276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.922291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.922733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.923151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.923166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.923528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.923870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.923886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 
00:28:56.477 [2024-05-15 12:30:24.924253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.924547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.924561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.925004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.925369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.925385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.925827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.926125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.926140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.926584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.927021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.927037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.927478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.927861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.927875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.928197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.928614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.928628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.477 [2024-05-15 12:30:24.929075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.929288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.929304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 
00:28:56.477 [2024-05-15 12:30:24.929744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.930109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.477 [2024-05-15 12:30:24.930123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.477 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.930437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.930858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.930873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.931215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.931648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.931661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.932104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.932454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.932468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.932823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.933238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.933252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.933619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.934052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.934067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.934484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.934844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.934859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 
00:28:56.478 [2024-05-15 12:30:24.935297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.935671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.935686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.936102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.936472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.936486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.936856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.937238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.937252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.937629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.937841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.937855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.938240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.938608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.938622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.939055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.939536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.939583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.940095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.940545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.940591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 
00:28:56.478 [2024-05-15 12:30:24.941013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.941360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.941374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.941749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.942164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.942240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.942752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.943221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.943268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.943807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.944263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.944309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.944742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.945239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.945278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.945778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.946201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.946241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.946743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.947138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.947175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 
00:28:56.478 [2024-05-15 12:30:24.947619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.948088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.948126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.948562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.948981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.949019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.949495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.950002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.950049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.950470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.950852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.950890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.478 [2024-05-15 12:30:24.951391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.951827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.478 [2024-05-15 12:30:24.951838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.478 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.952251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.952659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.952697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.953178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.953644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.953683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 
00:28:56.479 [2024-05-15 12:30:24.954158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.954577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.954616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.955114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.955626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.955665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.956089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.956580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.956619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.957024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.957500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.957539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.958058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.958474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.958531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.958894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.959403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.959443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.959810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.960305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.960316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 
00:28:56.479 [2024-05-15 12:30:24.960676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.961161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.961206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.961679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.962201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.962241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.962758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.963252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.963291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.963793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.964278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.964317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.964821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.965316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.965355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.965863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.966282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.966321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.966660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.967156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.967210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 
00:28:56.479 [2024-05-15 12:30:24.967622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.968119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.968157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.968606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.969028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.969066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.969509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.970008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.970046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.970499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.970892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.970929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.971405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.971896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.971935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.972411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.972919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.972957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.973363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.973629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.973668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 
00:28:56.479 [2024-05-15 12:30:24.973922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.974379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.974419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.974861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.975282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.975321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.975816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.976227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.976267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.976740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.977171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.977218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.977637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.977975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.978013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.978448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.978857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.978895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.479 qpair failed and we were unable to recover it. 00:28:56.479 [2024-05-15 12:30:24.979293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.479 [2024-05-15 12:30:24.979600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.979639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 
00:28:56.480 [2024-05-15 12:30:24.979911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.980306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.980345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-05-15 12:30:24.980842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.981332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.981372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-05-15 12:30:24.981726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.982124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.982162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-05-15 12:30:24.982596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.983079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.983120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-05-15 12:30:24.983463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.983906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.983945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-05-15 12:30:24.984386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.984808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.984846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-05-15 12:30:24.985332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.985469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.985507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 
00:28:56.480 [2024-05-15 12:30:24.985938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.986382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.986421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-05-15 12:30:24.986873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.987281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.987292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-05-15 12:30:24.987646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.988057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.988069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-05-15 12:30:24.988440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.988863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.988901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-05-15 12:30:24.989298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.989730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.989742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-05-15 12:30:24.990098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.990514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.990552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-05-15 12:30:24.991076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.991465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-05-15 12:30:24.991477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 
00:28:56.746 [2024-05-15 12:30:24.991838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.992200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.992211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:24.992645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.993080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.993091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:24.993512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.993912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.993950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:24.994442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.994858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.994896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:24.995385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.995749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.995762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:24.996132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.996634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.996673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:24.997173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.997542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.997582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 
00:28:56.746 [2024-05-15 12:30:24.997924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.998413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.998453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:24.998893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.999144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.999180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:24.999571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.999938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:24.999976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.000471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.000875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.000914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.001241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.001651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.001662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.002013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.002446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.002486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.003012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.003523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.003562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 
00:28:56.746 [2024-05-15 12:30:25.003997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.004379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.004425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.004842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.005253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.005292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.005789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.006211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.006250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.006751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.007227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.007267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.007768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.008178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.008240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.008669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.009134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.009173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.009429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.009871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.009909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 
00:28:56.746 [2024-05-15 12:30:25.010098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.010542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.010581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.011048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.011426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.011438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.011729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.012172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.012220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.012641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.013126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.013170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.013616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.014109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.014147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.746 qpair failed and we were unable to recover it. 00:28:56.746 [2024-05-15 12:30:25.014662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.746 [2024-05-15 12:30:25.015122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.015160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.015619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.016145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.016182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 
00:28:56.747 [2024-05-15 12:30:25.016550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.016901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.016939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.017364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.017559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.017597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.018001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.018414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.018453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.018861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.019277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.019316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.019682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.020170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.020216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.020665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.020910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.020948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.021315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.021754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.021798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 
00:28:56.747 [2024-05-15 12:30:25.022240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.022728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.022767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.023245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.023764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.023802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.024053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.024542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.024581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.024934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.025362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.025401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.025815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.026304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.026344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.026838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.027262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.027274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.027462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.027839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.027878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 
00:28:56.747 [2024-05-15 12:30:25.028151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.028666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.028705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.028916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.029407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.029446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.029854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.030343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.030382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.030792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.031282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.031320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.031824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.032310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.032349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.032850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.033272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.033311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.033808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.034213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.034252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 
00:28:56.747 [2024-05-15 12:30:25.034814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.035215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.035258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.035647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.036089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.036127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.036441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.036932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.036970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.037233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.037666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.037704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.038210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.038681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.038719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.039202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.039597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.039608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 00:28:56.747 [2024-05-15 12:30:25.040029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.040450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.747 [2024-05-15 12:30:25.040490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.747 qpair failed and we were unable to recover it. 
00:28:56.748 [2024-05-15 12:30:25.040796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.041284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.041323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.041806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.042221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.042260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.042645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.043153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.043200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.043564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.043976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.044015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.044419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.044822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.044861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.045336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.045790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.045828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.046264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.046676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.046714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 
00:28:56.748 [2024-05-15 12:30:25.047084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.047336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.047347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.047708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.048212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.048252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.048544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.048948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.048986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.049481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.049948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.049987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.050386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.050776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.050815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.051216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.051575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.051614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.051965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.052455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.052494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 
00:28:56.748 [2024-05-15 12:30:25.052929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.053366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.053420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.053847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.054329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.054341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.054669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.055158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.055201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.055391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.055834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.055873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.056291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.056536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.056574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.057077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.057504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.057544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.058043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.058442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.058481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 
00:28:56.748 [2024-05-15 12:30:25.058930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.059322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.059361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.059798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.060264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.060304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.060661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.061153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.061197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.061655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.062123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.062161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.062550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.062936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.062974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.063406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.063899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.063937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.064433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.064906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.064945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 
00:28:56.748 [2024-05-15 12:30:25.065357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.065791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.065829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.748 qpair failed and we were unable to recover it. 00:28:56.748 [2024-05-15 12:30:25.066216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.748 [2024-05-15 12:30:25.066640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.066678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.067172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.067602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.067641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.068139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.068329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.068368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.068815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.069218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.069264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.069664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.070156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.070208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.070639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.070991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.071030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 
00:28:56.749 [2024-05-15 12:30:25.071526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.071928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.071966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.072378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.072866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.072904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.073378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.073790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.073801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.074145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.074648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.074686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.075188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.075606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.075645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.076143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.076627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.076666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.077092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.077581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.077620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 
00:28:56.749 [2024-05-15 12:30:25.078043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.078470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.078509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.078915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.079401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.079438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.079885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.080373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.080412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.080665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.081150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.081188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.081604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.082041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.082079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.082577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.082981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.083018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.083495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.083969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.083979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 
00:28:56.749 [2024-05-15 12:30:25.084422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.084891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.084929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.085464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.085946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.085984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.086407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.086896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.086934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.087430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.087900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.087938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.088360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.088776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.088814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.089281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.089710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.089721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.090084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.090532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.090571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 
00:28:56.749 [2024-05-15 12:30:25.090999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.091440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.091479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.091977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.092480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.092519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.092963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.093311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.749 [2024-05-15 12:30:25.093350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.749 qpair failed and we were unable to recover it. 00:28:56.749 [2024-05-15 12:30:25.093847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.094300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.094354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.094906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.095423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.095472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.096014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.096449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.096468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.096939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.097454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.097501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 
00:28:56.750 [2024-05-15 12:30:25.098026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.098529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.098594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.099053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.099541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.099589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.100076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.100508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.100555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.100981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.101441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.101488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.102023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.102464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.102485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.102933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.103378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.103425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.103922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.104305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.104324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 
00:28:56.750 [2024-05-15 12:30:25.104784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.105168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.105222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.105752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.106258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.106305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.106827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.107287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.107333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.107790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.108099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.108119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.108336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.108716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.108763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.109157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.109591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.109637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.110102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.110612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.110631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 
00:28:56.750 [2024-05-15 12:30:25.111082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.111567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.111613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.112142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.112638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.112684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.113216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.113689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.113743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.114297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.114672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.114733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.115248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.115682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.115728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.750 qpair failed and we were unable to recover it. 00:28:56.750 [2024-05-15 12:30:25.116209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.116652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.750 [2024-05-15 12:30:25.116698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.117234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.117719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.117765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 
00:28:56.751 [2024-05-15 12:30:25.118321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.118758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.118804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.119331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.119790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.119840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.120289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.120779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.120825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.121270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.121706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.121752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.122259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.122796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.122841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.123367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.123851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.123870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.124283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.124790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.124836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 
00:28:56.751 [2024-05-15 12:30:25.125287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.125767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.125785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.126254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.126765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.126812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.127352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.127880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.127927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.128410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.128861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.128907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.129418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.129847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.129893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.130399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.130908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.130953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.131466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.131956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.132003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 
00:28:56.751 [2024-05-15 12:30:25.132474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.132963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.133009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.133471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.133911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.133930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.134396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.134852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.134897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.135436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.135800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.135821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.136307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.136794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.136839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.137394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.137820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.137839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.138304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.138737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.138784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 
00:28:56.751 [2024-05-15 12:30:25.139224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.139661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.139717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.140101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.140475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.140523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.140877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.141322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.141341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.141680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.142219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.142267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.142780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.143208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.143255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.143779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.144234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.144281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 00:28:56.751 [2024-05-15 12:30:25.144740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.145202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.145251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.751 qpair failed and we were unable to recover it. 
00:28:56.751 [2024-05-15 12:30:25.145701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.751 [2024-05-15 12:30:25.146152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.146171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.146624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.147074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.147093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.147529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.147883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.147902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.148342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.148793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.148811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.149218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.149593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.149612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.150070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.150433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.150454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.150814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.151209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.151228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 
00:28:56.752 [2024-05-15 12:30:25.151666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.152123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.152143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.152579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.153023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.153042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.153433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.153888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.153909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.154284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.154689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.154708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.155080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.155397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.155416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.155735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.156188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.156211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.156622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.157071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.157089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 
00:28:56.752 [2024-05-15 12:30:25.157437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.157892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.157911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.158300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.158640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.158658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.159032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.159483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.159503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.159719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.160188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.160211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.160650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.161049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.161073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.161455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.161905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.161924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.162297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.162743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.162762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 
00:28:56.752 [2024-05-15 12:30:25.163170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.163544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.163564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.164023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.164322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.164342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.164744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.165126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.165145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.165559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.165964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.165983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.166369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.166739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.166758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.167162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.167595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.167614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.167998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.168351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.168371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 
00:28:56.752 [2024-05-15 12:30:25.168822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.169199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.169218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.169590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.170006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.170027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.752 qpair failed and we were unable to recover it. 00:28:56.752 [2024-05-15 12:30:25.170462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.752 [2024-05-15 12:30:25.170832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.170851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.171250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.171679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.171700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.172160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.172540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.172559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.172965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.173329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.173350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.173678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.174159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.174178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 
00:28:56.753 [2024-05-15 12:30:25.174590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.174968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.174984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.175427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.175865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.175880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.176249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.176610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.176625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.176981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.177417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.177431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.177579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.178040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.178054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.178425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.178719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.178733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.179079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.179497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.179515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 
00:28:56.753 [2024-05-15 12:30:25.179872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.180249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.180263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.180632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.181007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.181020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.181370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.181717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.181730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.182094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.182437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.182451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.182796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.183164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.183178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.183507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.183904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.183918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.184219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.184662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.184677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 
00:28:56.753 [2024-05-15 12:30:25.185116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.185282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.185297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.185651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.185957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.185970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.186348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.186786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.186801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.187220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.187585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.187600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.187943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.188382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.188400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.188706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.189106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.189119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.189504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.189866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.189879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 
00:28:56.753 [2024-05-15 12:30:25.190229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.190665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.190679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.191041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.191239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.191253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.191695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.192059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.192105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.192500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.192900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.753 [2024-05-15 12:30:25.192946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.753 qpair failed and we were unable to recover it. 00:28:56.753 [2024-05-15 12:30:25.193469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.193944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.193990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.194426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.194852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.194897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.195360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.195829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.195868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 
00:28:56.754 [2024-05-15 12:30:25.196294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.196699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.196737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.197137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.197567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.197607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.198096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.198436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.198448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.198799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.199271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.199311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.199774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.200130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.200141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.200555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.200841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.200879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.201223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.201730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.201768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 
00:28:56.754 [2024-05-15 12:30:25.202269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.202673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.202712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.203153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.203646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.203658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.204094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.204449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.204487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.204899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.205309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.205321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.205609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.206083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.206121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.206609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.206792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.206804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.207207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.207615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.207653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 
00:28:56.754 [2024-05-15 12:30:25.208156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.208511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.208550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.208916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.209415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.209455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.209807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.210237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.210278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.210704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.211200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.211239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.211672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.212088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.212127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.212588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.212959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.212998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.213358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.213843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.213881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 
00:28:56.754 [2024-05-15 12:30:25.214390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.214821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.214859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.215336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.215827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.215874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.216297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.216790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.754 [2024-05-15 12:30:25.216828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.754 qpair failed and we were unable to recover it. 00:28:56.754 [2024-05-15 12:30:25.217028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.217448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.217488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.217987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.218332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.218371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.218812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.219281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.219319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.219744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.220211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.220250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 
00:28:56.755 [2024-05-15 12:30:25.220769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.221236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.221276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.221754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.222149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.222186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.222571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.222995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.223032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.223529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.223952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.223990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.224364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.224784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.224821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.225238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.225674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.225712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.226123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.226473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.226512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 
00:28:56.755 [2024-05-15 12:30:25.226860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.227312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.227350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.227826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.228235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.228275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.228770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.229014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.229053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.229475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.229910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.229948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.230387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.230782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.230793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.231116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.231516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.231556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.232034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.232449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.232488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 
00:28:56.755 [2024-05-15 12:30:25.232966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.233417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.233456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.233653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.234029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.234067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.234487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.234938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.234977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.235418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.235923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.235962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.236390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.236874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.236913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.237439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.237912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.237950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.238390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.238817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.238856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 
00:28:56.755 [2024-05-15 12:30:25.239231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.239479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.239517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.240012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.240428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.240466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.755 qpair failed and we were unable to recover it. 00:28:56.755 [2024-05-15 12:30:25.240906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.755 [2024-05-15 12:30:25.241396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.241435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.241862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.242300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.242339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.242880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.243264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.243303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.243650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.244049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.244088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.244514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.244950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.244989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 
00:28:56.756 [2024-05-15 12:30:25.245413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.245821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.245859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.246209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.246608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.246646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.247149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.247587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.247626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.248042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.248460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.248499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.248832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.249319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.249359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.249834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.250245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.250284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.250762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.251171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.251217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 
00:28:56.756 [2024-05-15 12:30:25.251661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.252058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.252096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.252529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.252890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.252928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.253370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.253884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.253922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.254401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.254912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.254951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.255388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.255879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.255918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.256339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.256823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.256862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.257203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.257617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.257656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 
00:28:56.756 [2024-05-15 12:30:25.258066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.258487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.258526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.259006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.259367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.259407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.259911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.260314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.260353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.260854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.261187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.261234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.261731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.262146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.262184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.262675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.263184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.263199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.263572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.264040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.264084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 
00:28:56.756 [2024-05-15 12:30:25.264508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.264939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.264951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.265369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.265758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.265797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.266182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.266679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.266707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.267120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.267422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.756 [2024-05-15 12:30:25.267434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.756 qpair failed and we were unable to recover it. 00:28:56.756 [2024-05-15 12:30:25.267801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.757 [2024-05-15 12:30:25.268161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.757 [2024-05-15 12:30:25.268209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.757 qpair failed and we were unable to recover it. 00:28:56.757 [2024-05-15 12:30:25.268633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.757 [2024-05-15 12:30:25.269060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.757 [2024-05-15 12:30:25.269072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:56.757 qpair failed and we were unable to recover it. 00:28:57.023 [2024-05-15 12:30:25.269486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.269847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.269858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.023 qpair failed and we were unable to recover it. 
00:28:57.023 [2024-05-15 12:30:25.270259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.270624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.270635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.023 qpair failed and we were unable to recover it. 00:28:57.023 [2024-05-15 12:30:25.270837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.271280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.271318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.023 qpair failed and we were unable to recover it. 00:28:57.023 [2024-05-15 12:30:25.271740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.272145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.272158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.023 qpair failed and we were unable to recover it. 00:28:57.023 [2024-05-15 12:30:25.272476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.272908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.272946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.023 qpair failed and we were unable to recover it. 00:28:57.023 [2024-05-15 12:30:25.273415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.273828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.273840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.023 qpair failed and we were unable to recover it. 00:28:57.023 [2024-05-15 12:30:25.274203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.274557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.274599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.023 qpair failed and we were unable to recover it. 00:28:57.023 [2024-05-15 12:30:25.275036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.275318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.275330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.023 qpair failed and we were unable to recover it. 
00:28:57.023 [2024-05-15 12:30:25.275729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.023 [2024-05-15 12:30:25.276037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.276048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.276413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.276761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.276772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.277198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.277635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.277649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.278063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.278425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.278437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.278741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.279105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.279117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.279456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.279768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.279782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.280154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.280586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.280598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 
00:28:57.024 [2024-05-15 12:30:25.280990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.281347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.281359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.281665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.282011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.282023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.282456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.282792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.282804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.283094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.283246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.283258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.283696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.283817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.283829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.284116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.284530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.284542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.284857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.285223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.285235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 
00:28:57.024 [2024-05-15 12:30:25.285600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.285976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.285987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.286351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.286785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.286799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.287160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.287503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.287515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.287927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.288234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.288247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.288545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.288973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.288985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.289422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.289774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.289786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.290138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.290574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.290586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 
00:28:57.024 [2024-05-15 12:30:25.291021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.291480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.291492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.291899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.292333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.292345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.292753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.293164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.293176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.293617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.294051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.294062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.294428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.294788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.294799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.295221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.295660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.024 [2024-05-15 12:30:25.295671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.024 qpair failed and we were unable to recover it. 00:28:57.024 [2024-05-15 12:30:25.296131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.296514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.296526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 
00:28:57.025 [2024-05-15 12:30:25.296964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.297376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.297388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.297759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.298198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.298210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.298623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.299059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.299071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.299441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.299785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.299797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.300162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.300597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.300609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.301062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.301497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.301509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.301948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.302406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.302418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 
00:28:57.025 [2024-05-15 12:30:25.302853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.303292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.303304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.303744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.304204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.304216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.304650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.305028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.305040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.305394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.305830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.305841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.306283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.306697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.306709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.307123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.307535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.307547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.307906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.308286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.308299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 
00:28:57.025 [2024-05-15 12:30:25.308663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.309019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.309031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.309454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.309772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.309783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.310211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.310622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.310634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.310927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.311370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.311382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.311823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.312195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.312207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.312570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.312931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.312942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 00:28:57.025 [2024-05-15 12:30:25.313145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.313581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.025 [2024-05-15 12:30:25.313593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.025 qpair failed and we were unable to recover it. 
00:28:57.025 [2024-05-15 12:30:25.313990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.025 [2024-05-15 12:30:25.314408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.025 [2024-05-15 12:30:25.314420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420
00:28:57.025 qpair failed and we were unable to recover it.
[The four entries above repeat continuously from 00:28:57.025 through 00:28:57.031 (log timestamps 2024-05-15 12:30:25.313990 to 12:30:25.434729): every connect() attempt fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for addr=10.0.0.2, port=4420, and each qpair is reported as failed and unrecoverable. All repeats reference tqpair=0x7f5ba0000b90, except four entries between 12:30:25.357110 and 12:30:25.359896 that reference tqpair=0x7f5ba8000b90.]
00:28:57.031 [2024-05-15 12:30:25.435011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.435406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.435421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.435732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.436041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.436055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.436472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.436838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.436855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.437147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.437490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.437505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.437878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.438174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.438189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.438613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.438982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.438998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.439361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.439783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.439797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 
00:28:57.031 [2024-05-15 12:30:25.440171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.440545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.440563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.440862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.441137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.441153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.441526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.441837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.441852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.442234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.442515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.442530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.442843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.443217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.443232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.443544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.443847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.443861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.444280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.444645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.444662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 
00:28:57.031 [2024-05-15 12:30:25.445013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.445390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.445437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.445888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.446305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.446319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.446680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.447213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.447259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.447599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.448072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.448118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.448546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.449022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.449067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.449419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.449815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.449860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.450242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.450614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.450659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 
00:28:57.031 [2024-05-15 12:30:25.451080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.451436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.451476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.451966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.452445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.452457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.031 [2024-05-15 12:30:25.452775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.453146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.031 [2024-05-15 12:30:25.453184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.031 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.453462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.453933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.453971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.454421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.454781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.454819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.455232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.455731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.455769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.456244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.456663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.456701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 
00:28:57.032 [2024-05-15 12:30:25.457127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.457579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.457619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.458051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.458396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.458435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.458918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.459263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.459301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.459801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.460213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.460252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.460603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.461033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.461071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.461530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.461967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.462004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.462438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.462851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.462889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 
00:28:57.032 [2024-05-15 12:30:25.463224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.463692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.463730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.464181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.464599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.464637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.465077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.465516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.465556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.466009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.466480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.466520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.466870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.467360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.467399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.467821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.468175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.468227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.468658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.469123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.469161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 
00:28:57.032 [2024-05-15 12:30:25.469596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.469947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.469985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.470471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.470869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.470880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.471306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.471796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.471834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.472312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.472673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.472711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.473232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.473659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.473671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.474116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.474584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.474624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.032 qpair failed and we were unable to recover it. 00:28:57.032 [2024-05-15 12:30:25.475103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.032 [2024-05-15 12:30:25.475620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.475660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 
00:28:57.033 [2024-05-15 12:30:25.476093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.476557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.476596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.477092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.477511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.477551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.477913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.478174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.478223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.478666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.479083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.479120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.479619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.480130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.480169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.480608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.480976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.481014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.481488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.481828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.481867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 
00:28:57.033 [2024-05-15 12:30:25.482279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.482767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.482806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.483333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.483698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.483737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.484146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.484524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.484536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.484929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.485303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.485315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.485695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.486215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.486255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.486666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.487082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.487132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.487565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.488056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.488095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 
00:28:57.033 [2024-05-15 12:30:25.488597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.489131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.489170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.489658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.490080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.490118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.490606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.491076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.491114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.491540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.491954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.491965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.492332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.492704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.492742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.493116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.493541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.493553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.493867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.494303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.494342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 
00:28:57.033 [2024-05-15 12:30:25.494770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.495189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.495236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.495613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.495970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.496008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.496481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.496968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.497006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.497374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.497722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.497760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.498176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.498605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.498643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.498994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.499460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.499499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.033 [2024-05-15 12:30:25.499868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.500284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.500323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 
00:28:57.033 [2024-05-15 12:30:25.500667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.501112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.033 [2024-05-15 12:30:25.501150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:57.033 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.501622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.502110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.502158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.502600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.503050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.503089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.503524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.503999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.504037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.504464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.504876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.504914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.505275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.505689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.505727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.506225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.506584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.506622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 
00:28:57.034 [2024-05-15 12:30:25.506985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.507456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.507504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.507766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.508104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.508150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.508543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.508901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.508939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.509361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.509803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.509843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.510267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.510655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.510694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.511040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.511404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.511443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.511805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.512166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.512215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 
00:28:57.034 [2024-05-15 12:30:25.512625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.513032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.513070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.513511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.513941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.513980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.514418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.514904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.514943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.515378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.515800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.515851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.516292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.516655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.516692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.517046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.517460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.517499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.517977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.518376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.518415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 
00:28:57.034 [2024-05-15 12:30:25.518918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.519281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.519320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.519726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.520045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.520061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.520496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.520924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.520962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.521390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.521748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.521764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.522232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.522602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.522641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.523140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.523512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.523529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.523864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.524292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.524331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 
00:28:57.034 [2024-05-15 12:30:25.524656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.525039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.525077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.525550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.525913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.525950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.034 [2024-05-15 12:30:25.526387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.526749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.034 [2024-05-15 12:30:25.526787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.034 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.527289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.527692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.527730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.528265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.528759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.528797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.529275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.529637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.529675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.530120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.530508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.530547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 
00:28:57.035 [2024-05-15 12:30:25.531041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.531493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.531532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.531909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.532362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.532401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.532833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.533326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.533364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.533746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.534263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.534302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.534754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.535226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.535265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.535790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.536211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.536250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.536729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.537138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.537176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 
00:28:57.035 [2024-05-15 12:30:25.537668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.538087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.538125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.538620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.539155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.539203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.539707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.540125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.540163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.540582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.540959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.540975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.541401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.541832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.541870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.542301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.542639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.542678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.543124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.543569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.543586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 
00:28:57.035 [2024-05-15 12:30:25.543966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.544421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.544460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.035 [2024-05-15 12:30:25.544892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.545382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.035 [2024-05-15 12:30:25.545399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.035 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.545847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.546301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.546318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.546687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.547115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.547131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.547530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.547959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.547975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.548428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.548850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.548889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.549367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.549789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.549827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-05-15 12:30:25.550253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.550753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.550791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.551294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.551780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.551818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.552236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.552666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.552704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.553238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.553683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.553722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.554255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.554743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.554781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.555264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.555782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.555821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.556348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.556767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.556806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 
00:28:57.301 [2024-05-15 12:30:25.557253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.557615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.557653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.558117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.558617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.558656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.559183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.559683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.559722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.301 [2024-05-15 12:30:25.560166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.560692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.301 [2024-05-15 12:30:25.560731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.301 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.561240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.561568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.561611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.562117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.562558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.562603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.563007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.563443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.563482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-05-15 12:30:25.563905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.564420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.564437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.564826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.565276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.565315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.565770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.566234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.566272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.566702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.567204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.567221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.567613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.568037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.568052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.568483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.568971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.569010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.569544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.570023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.570061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-05-15 12:30:25.570607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.571107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.571146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.571665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.572217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.572256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.572780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.573287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.573303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.573702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.574103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.574142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.574625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.575093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.575131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.575588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.576013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.576051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.576562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.576988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.577027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-05-15 12:30:25.577562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.578022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.578060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.578596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.579123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.579161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.579664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.580136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.580174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.580649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.581068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.581107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.581540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.581992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.582008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.582468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.582888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.582926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.583436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.583938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.583976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 
00:28:57.302 [2024-05-15 12:30:25.584567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.585130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.585168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.585653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.586072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.586111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.586609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.587113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.587152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.587675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.588099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.588138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.588634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.589154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.589170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.302 qpair failed and we were unable to recover it. 00:28:57.302 [2024-05-15 12:30:25.589639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.302 [2024-05-15 12:30:25.590161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.590209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.590737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.591155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.591203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 
00:28:57.303 [2024-05-15 12:30:25.591684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.592092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.592108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.592517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.593009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.593048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.593587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.594012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.594050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.594597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.595131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.595147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.595537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.596005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.596044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.596472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.596896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.596934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.597418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.597876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.597915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 
00:28:57.303 [2024-05-15 12:30:25.598423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.598772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.598788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.599233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.599677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.599715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.600258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.600641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.600680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.601101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.601573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.601613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.602046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.602528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.602569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.603100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.603616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.603656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.604215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.604546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.604584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 
00:28:57.303 [2024-05-15 12:30:25.605035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.605510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.605549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.606004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.606523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.606562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.606958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.607389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.607429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.607851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.608230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.608247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.608637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.609126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.609164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.609660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.610145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.610162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.610662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.611047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.611086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 
00:28:57.303 [2024-05-15 12:30:25.611540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.611974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.612019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.612554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.613079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.613118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.613564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.613928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.613966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.303 qpair failed and we were unable to recover it. 00:28:57.303 [2024-05-15 12:30:25.614396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.614873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.303 [2024-05-15 12:30:25.614912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.615349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.615746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.615785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.616284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.616702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.616740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.617272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.617726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.617743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-05-15 12:30:25.618222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.618689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.618728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.619190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.619652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.619691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.620244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.620760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.620798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.621330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.621875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.621913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.622461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.622995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.623033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.623571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.624032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.624070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.624522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.625028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.625066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-05-15 12:30:25.625519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.625958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.625996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.626531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.627062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.627100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.627659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.628086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.628124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.628623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.629100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.629139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.629645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.630141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.630180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.630655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.631138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.631176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.631731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.632179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.632232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-05-15 12:30:25.632674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.633081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.633119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.633628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.634149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.634188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.634736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.635221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.635261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.635620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.636134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.636173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.636569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.637017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.637056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.637445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.637901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.637940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.638476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.638988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.639027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 
00:28:57.304 [2024-05-15 12:30:25.639541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.640049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.640087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.640655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.641086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.641103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.641563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.642072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.642111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.642658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.643096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.643135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.643587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.644046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.644085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.304 qpair failed and we were unable to recover it. 00:28:57.304 [2024-05-15 12:30:25.644623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.304 [2024-05-15 12:30:25.645119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.645157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.645659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.646140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.646157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-05-15 12:30:25.646617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.647053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.647070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.647530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.647913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.647930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.648313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.648760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.648776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.649243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.649576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.649592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.650060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.650434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.650450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.650912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.651321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.651339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.651800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.652295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.652313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-05-15 12:30:25.652772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.653173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.653189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.653603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.653944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.653960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.654428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.654835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.654851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.655293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.655686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.655702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.656106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.656476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.656493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.656796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.657249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.657266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.657728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.658187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.658207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-05-15 12:30:25.658624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.659043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.659060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.659470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.659862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.659879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.660327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.660717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.660738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.661248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.661691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.661708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.662225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.662683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.662699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.663153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.663562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.663579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.663971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.664417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.664434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 
00:28:57.305 [2024-05-15 12:30:25.664759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.665211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.665227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.665561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.665963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.665980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.305 qpair failed and we were unable to recover it. 00:28:57.305 [2024-05-15 12:30:25.666418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.666807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.305 [2024-05-15 12:30:25.666824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.667287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.667670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.667685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.668073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.668546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.668563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.668952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.669351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.669370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.669756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.670243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.670261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 
00:28:57.306 [2024-05-15 12:30:25.670722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.671116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.671132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.671561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.671941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.671958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.672415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.672844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.672860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.673318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.673727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.673744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.674081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.674489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.674506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.674909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.675362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.675379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.675711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.676085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.676102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 
00:28:57.306 [2024-05-15 12:30:25.676480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.676858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.676875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.677333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.677668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.677684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.678159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.678619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.678637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.679073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.679528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.679545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.679978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.680408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.680424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.680760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.681238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.681254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.681751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.682152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.682169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 
00:28:57.306 [2024-05-15 12:30:25.682562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.683004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.683021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.683484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.683888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.683904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.684357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.684667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.684683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.685226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.685635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.685651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.306 qpair failed and we were unable to recover it. 00:28:57.306 [2024-05-15 12:30:25.685985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.306 [2024-05-15 12:30:25.686436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.686453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.686888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.687330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.687347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.687731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.688110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.688126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 
00:28:57.307 [2024-05-15 12:30:25.688489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.688912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.688951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.689403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.689820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.689858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.690364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.690823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.690861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.691394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.691820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.691858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.692335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.692795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.692833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.693267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.693790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.693829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.694378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.694802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.694842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 
00:28:57.307 [2024-05-15 12:30:25.695345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.695774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.695813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.696336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.696857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.696895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.697397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.697845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.697884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.698404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.698902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.698941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.699438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.699773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.699789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.700236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.700661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.700700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.701118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.701656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.701697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 
00:28:57.307 [2024-05-15 12:30:25.702268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.702696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.702736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.703164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.703594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.703633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.704042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.704510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.704549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.705076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.705502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.705541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.705994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.706354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.706393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.706896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.707415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.707454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.707985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.708460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.708477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 
00:28:57.307 [2024-05-15 12:30:25.708865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.709322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.709362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.709866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.710386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.710427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.307 [2024-05-15 12:30:25.710812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.711289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.307 [2024-05-15 12:30:25.711329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.307 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.711800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.712301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.712341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.712819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.713317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.713357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.713943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.714404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.714443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.714953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.715379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.715418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 
00:28:57.308 [2024-05-15 12:30:25.715807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.716281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.716302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.716619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.717070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.717108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.717479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.717923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.717962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.718474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.718925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.718964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.720058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.720561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.720582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.720933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.721375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.721392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.721768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.722141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.722158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 
00:28:57.308 [2024-05-15 12:30:25.722638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.722971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.722988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.723421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.723814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.723831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.724272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.724705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.724721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.725177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.725594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.725611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.726001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.726457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.726498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.726939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.727412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.727430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.727761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.728158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.728175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 
00:28:57.308 [2024-05-15 12:30:25.728563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.728946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.728962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.729412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.729815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.729832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.730303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.730690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.730706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.731148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.731530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.731547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.731933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.732306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.732324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.732715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.733258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.733297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.733832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.734334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.734374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 
00:28:57.308 [2024-05-15 12:30:25.734842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.735261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.735301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.735691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.736161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.736208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.736732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.737180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.737227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.737736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.738265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.738305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.308 qpair failed and we were unable to recover it. 00:28:57.308 [2024-05-15 12:30:25.738687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.739165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.308 [2024-05-15 12:30:25.739214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.739695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.740063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.740101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.740549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.740920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.740958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 
00:28:57.309 [2024-05-15 12:30:25.741469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.741901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.741942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.742401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.742843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.742882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.743350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.743779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.743818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.744255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.744648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.744687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.745155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.745723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.745763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.746311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.746763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.746801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.747180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.747563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.747603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 
00:28:57.309 [2024-05-15 12:30:25.748066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.748500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.748539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.748978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.749459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.749475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.749929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.750372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.750411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.750935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.751424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.751441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.751828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.752274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.752313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.752742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.753249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.753289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 00:28:57.309 [2024-05-15 12:30:25.753813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.754240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.309 [2024-05-15 12:30:25.754280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.309 qpair failed and we were unable to recover it. 
00:28:57.309 [2024-05-15 12:30:25.754770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.309 [2024-05-15 12:30:25.755272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.309 [2024-05-15 12:30:25.755312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420
00:28:57.309 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) and nvme_tcp_qpair_connect_sock errors for tqpair=0x2211560 (addr=10.0.0.2, port=4420) repeat continuously between 12:30:25.755 and 12:30:25.909, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:28:57.580 [2024-05-15 12:30:25.909180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.580 [2024-05-15 12:30:25.909646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.580 [2024-05-15 12:30:25.909663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420
00:28:57.580 qpair failed and we were unable to recover it.
00:28:57.580 [2024-05-15 12:30:25.910147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.580 [2024-05-15 12:30:25.910536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.580 [2024-05-15 12:30:25.910553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.580 qpair failed and we were unable to recover it. 00:28:57.580 [2024-05-15 12:30:25.911003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.580 [2024-05-15 12:30:25.911367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.580 [2024-05-15 12:30:25.911387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.580 qpair failed and we were unable to recover it. 00:28:57.580 [2024-05-15 12:30:25.911854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.580 [2024-05-15 12:30:25.912322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.580 [2024-05-15 12:30:25.912339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.580 qpair failed and we were unable to recover it. 00:28:57.580 [2024-05-15 12:30:25.912740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.580 [2024-05-15 12:30:25.913202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.580 [2024-05-15 12:30:25.913219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.580 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.913707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.914197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.914214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.914677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.915154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.915171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.915663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.916145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.916162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 
00:28:57.581 [2024-05-15 12:30:25.916533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.916970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.916986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.917455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.917958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.917975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.918415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.918873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.918890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.919337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.919719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.919735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.920195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.920653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.920669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.921056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.921497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.921514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.921971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.922416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.922432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 
00:28:57.581 [2024-05-15 12:30:25.922894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.923303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.923320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.923802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.924288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.924306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.924744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.925206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.925223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.925661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.926118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.926135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.926623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.927062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.927084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.927553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.928050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.928067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.928531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.928933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.928950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 
00:28:57.581 [2024-05-15 12:30:25.929413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.929870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.929886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.930371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.930873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.930890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.931394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.931846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.931862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.932250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.932641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.932658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.933103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.933556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.933573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.934032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.934432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.934449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.934927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.935413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.935429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 
00:28:57.581 [2024-05-15 12:30:25.935831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.936280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.936319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.936792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.937234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.937275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.937759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.938185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.938233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.938725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.939233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.939273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.939756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.940256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.940296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.581 qpair failed and we were unable to recover it. 00:28:57.581 [2024-05-15 12:30:25.940744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.581 [2024-05-15 12:30:25.941206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.941247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.941663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.942162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.942211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 
00:28:57.582 [2024-05-15 12:30:25.942741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.943266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.943305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.943863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.944309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.944349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.944832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.945328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.945368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.945901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.946324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.946363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.946820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.947279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.947319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.947851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.948350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.948366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.948777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.949254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.949293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 
00:28:57.582 [2024-05-15 12:30:25.949725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.950247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.950287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.950790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.951291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.951330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.951865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.952289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.952329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.952841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.953343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.953383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.953847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.954349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.954389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.954926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.955402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.955441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.955974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.956496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.956536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 
00:28:57.582 [2024-05-15 12:30:25.956941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.957394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.957433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.957871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.958371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.958411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.958946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.959450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.959494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.959921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.960422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.960481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.960918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.961348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.961388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.961922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.962374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.962413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 00:28:57.582 [2024-05-15 12:30:25.962926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.963400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.963439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.582 qpair failed and we were unable to recover it. 
00:28:57.582 [2024-05-15 12:30:25.963849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.582 [2024-05-15 12:30:25.964244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.964283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.964742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.965232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.965249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.965694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.966116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.966155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.966702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.967246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.967287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.967742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.968223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.968264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.968766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.969293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.969334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.969837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.970355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.970372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 
00:28:57.583 [2024-05-15 12:30:25.970770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.971270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.971310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.971836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.972319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.972359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.972882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.973409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.973449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.973985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.974362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.974402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.974915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.975418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.975471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.975937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.976448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.976488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.977011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.977515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.977561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 
00:28:57.583 [2024-05-15 12:30:25.977998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.978425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.978464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.978977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.979483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.979523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.979993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.980475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.980514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.980982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.981372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.981412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.981908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.982412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.982452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.982850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.983318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.983335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.983820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.984330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.984370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 
00:28:57.583 [2024-05-15 12:30:25.984903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.985379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.985396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.985811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.986278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.986317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.986854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.987364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.987405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.987933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.988439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.988479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.989019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.989493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.989550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.990064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.990515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.990555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.991098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.991525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.991565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 
00:28:57.583 [2024-05-15 12:30:25.992085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.992589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.992630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.993166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.993686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.993725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.583 qpair failed and we were unable to recover it. 00:28:57.583 [2024-05-15 12:30:25.994188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.583 [2024-05-15 12:30:25.994682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:25.994721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:25.995246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:25.995776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:25.995815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:25.996295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:25.996765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:25.996804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:25.997341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:25.997836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:25.997875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:25.998426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:25.998934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:25.998972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 
00:28:57.584 [2024-05-15 12:30:25.999511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.000022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.000062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.000582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.001089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.001128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.001585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.002093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.002132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.002701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.003105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.003144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.003620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.004101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.004140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.004674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.005215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.005255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.005796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.006277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.006317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 
00:28:57.584 [2024-05-15 12:30:26.006861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.007375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.007416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.007868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.008348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.008388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.008890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.009419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.009459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.010019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.010485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.010524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.011034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.011553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.011593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.012109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.012612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.012652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.013207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.013711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.013749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 
00:28:57.584 [2024-05-15 12:30:26.014264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.014794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.014833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.015396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.015833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.015872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.016368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.016795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.016834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.017328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.017857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.017897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.018462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.018896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.018948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.019395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.019786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.019824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.020318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.020802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.020840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 
00:28:57.584 [2024-05-15 12:30:26.021344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.021873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.021912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.022470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.022898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.022938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.023460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.023833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.023872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.024336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.024783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.584 [2024-05-15 12:30:26.024822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.584 qpair failed and we were unable to recover it. 00:28:57.584 [2024-05-15 12:30:26.025286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.025708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.025746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.026251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.026778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.026816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.027325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.027789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.027828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 
00:28:57.585 [2024-05-15 12:30:26.028362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.028860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.028899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.029445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.029950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.029995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.030452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.030959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.030998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.031534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.032039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.032077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.032617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.033113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.033152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.033724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.034219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.034259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.034696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.035105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.035144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 
00:28:57.585 [2024-05-15 12:30:26.035667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.036172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.036230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.036793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.037243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.037283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.037822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.038361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.038401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.038917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.039423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.039472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.039887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.040392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.040437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.040975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.041485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.041525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.042070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.042554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.042594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 
00:28:57.585 [2024-05-15 12:30:26.043138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.043654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.043694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.044145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.044645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.044686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.045213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.045725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.045764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.046269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.046754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.046792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.047335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.047844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.047891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.048340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.048729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.048769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.049299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.049825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.049864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 
00:28:57.585 [2024-05-15 12:30:26.050372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.050892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.050931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.051471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.051876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.051892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.052356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.052805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.052844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.053402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.053856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.053895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.054429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.054964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.055003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.585 [2024-05-15 12:30:26.055537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.056059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.585 [2024-05-15 12:30:26.056098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.585 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.056602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.057111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.057150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 
00:28:57.586 [2024-05-15 12:30:26.057698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.058204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.058243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.058784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.059267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.059307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.059780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.060228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.060269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.060784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.061269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.061310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.061764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.062267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.062307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.062844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.063299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.063340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.063784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.064253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.064292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 
00:28:57.586 [2024-05-15 12:30:26.064818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.065189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.065241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.065685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.066184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.066233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.066762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.067187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.067238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.067741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.068163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.068213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.068708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.069205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.069245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.069761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.070232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.070273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.070793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.071302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.071342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 
00:28:57.586 [2024-05-15 12:30:26.071874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.072382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.072399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.072777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.073283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.073324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.073801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.074283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.074324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.074816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.075189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.075240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.075756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.076017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.076035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.076529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.076936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.076975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.077487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.077964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.077981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 
00:28:57.586 [2024-05-15 12:30:26.078426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.078868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.078911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.079297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.079738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.079777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.080141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.080600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.586 [2024-05-15 12:30:26.080640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.586 qpair failed and we were unable to recover it. 00:28:57.586 [2024-05-15 12:30:26.081156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.081686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.081726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.082188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.082623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.082662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.083095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.083505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.083545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.084046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.084553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.084593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 
00:28:57.587 [2024-05-15 12:30:26.085105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.085585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.085625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.086097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.086517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.086557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.087043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.087468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.087508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.087926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.088387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.088428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.088878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.089305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.089345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.089863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.090367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.090406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.090897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.091165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.091185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 
00:28:57.587 [2024-05-15 12:30:26.091408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.091870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.091909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.092361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.092839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.092878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.093377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.093822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.093860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.094360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.094802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.094842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.095351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.095803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.095842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.096293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.096771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.096809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.097260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.097700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.097738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 
00:28:57.587 [2024-05-15 12:30:26.098187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.098695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.098734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.099259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.099688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.099704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.100107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.100606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.587 [2024-05-15 12:30:26.100647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.587 qpair failed and we were unable to recover it. 00:28:57.587 [2024-05-15 12:30:26.101182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.101424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.101441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 00:28:57.852 [2024-05-15 12:30:26.101894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.102309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.102326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 00:28:57.852 [2024-05-15 12:30:26.102721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.103090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.103107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 00:28:57.852 [2024-05-15 12:30:26.103472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.103926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.103943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 
00:28:57.852 [2024-05-15 12:30:26.104429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.104897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.104936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 00:28:57.852 [2024-05-15 12:30:26.105442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.105945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.105983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 00:28:57.852 [2024-05-15 12:30:26.106415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.106917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.106955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 00:28:57.852 [2024-05-15 12:30:26.107494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.107979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.108018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 00:28:57.852 [2024-05-15 12:30:26.108555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.109031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.109070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 00:28:57.852 [2024-05-15 12:30:26.109579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.110038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.110076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 00:28:57.852 [2024-05-15 12:30:26.110517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.110953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.110991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 
00:28:57.852 [2024-05-15 12:30:26.111423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.111923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.111961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 00:28:57.852 [2024-05-15 12:30:26.112483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.113009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.113047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 00:28:57.852 [2024-05-15 12:30:26.113580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.114059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.852 [2024-05-15 12:30:26.114075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.852 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.114546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.114976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.115014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.115407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.115872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.115911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.116372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.116799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.116838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.117291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.117748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.117786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 
00:28:57.853 [2024-05-15 12:30:26.118256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.118688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.118704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.119160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.119664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.119681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.120185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.120649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.120688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.121175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.121641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.121681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.122246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.122750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.122789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.123272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.123712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.123751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.124290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.124827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.124866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 
00:28:57.853 [2024-05-15 12:30:26.125291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.125789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.125827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.126331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.126755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.126793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.127307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.127812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.127850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.128341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.128771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.128809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.129346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.129844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.129893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.130342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.130811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.130850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.131296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.131787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.131826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 
00:28:57.853 [2024-05-15 12:30:26.132366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.132798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.132836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.133270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.133712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.133751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.134286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.134725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.134764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.135261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.135754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.135771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.136241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.136749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.136788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.137334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.137840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.137878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.138418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.138852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.138890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 
00:28:57.853 [2024-05-15 12:30:26.139349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.139861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.139899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.140419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.140945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.140989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.141538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.141996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.142034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.142563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.143069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.143108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.853 qpair failed and we were unable to recover it. 00:28:57.853 [2024-05-15 12:30:26.143667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.853 [2024-05-15 12:30:26.144161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.144209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.144646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.145174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.145222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.145763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.146203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.146220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 
00:28:57.854 [2024-05-15 12:30:26.146702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.147198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.147215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.147680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.148139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.148156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.148666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.149108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.149124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.149545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.150032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.150049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.150493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.150876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.150893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.151348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.151618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.151635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.151804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.152242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.152259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 
00:28:57.854 [2024-05-15 12:30:26.152504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.152939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.152956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.153395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.153853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.153870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.154260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.154719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.154736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.155101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.155309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.155325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.155763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.156229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.156246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.156631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.157021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.157037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.157444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.157825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.157843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 
00:28:57.854 [2024-05-15 12:30:26.158299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.158708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.158725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.159188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.159560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.159578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.159897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.160273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.160290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.160592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.160955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.160971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.161343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.161801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.161818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.162276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.162735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.162752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.163133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.163516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.163533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 
00:28:57.854 [2024-05-15 12:30:26.163919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.164230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.164247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.164708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.165210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.165227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.165664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.166034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.166050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.166517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.166976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.166993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.167351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.167674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.167690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.168071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.168501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.854 [2024-05-15 12:30:26.168518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.854 qpair failed and we were unable to recover it. 00:28:57.854 [2024-05-15 12:30:26.168959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.169275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.169292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 
00:28:57.855 [2024-05-15 12:30:26.169757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.170153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.170170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.170587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.171049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.171065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.171429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.171885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.171902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.172348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.172737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.172753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.173204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.173651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.173668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.174059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.174503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.174520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.174957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.175412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.175429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 
00:28:57.855 [2024-05-15 12:30:26.175882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.176398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.176414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.176890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.177283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.177300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.177765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.178271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.178288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.178687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.179144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.179160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.179650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.180060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.180077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.180526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.180964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.180980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.181459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.181952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.181969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 
00:28:57.855 [2024-05-15 12:30:26.182444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.182864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.182880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.183338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.183792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.183808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.184265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.184645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.184662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.185106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.185481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.185500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.185958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.186411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.186429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.186841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.187300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.187339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.187826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.188349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.188389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 
00:28:57.855 [2024-05-15 12:30:26.188945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.189447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.189486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.190046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.190534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.190573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.191129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.191625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.191665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.192119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.192643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.192683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.193226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.193729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.193767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.194242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.194745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.194783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.855 [2024-05-15 12:30:26.195291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.195704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.195742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 
00:28:57.855 [2024-05-15 12:30:26.196260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.196764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.855 [2024-05-15 12:30:26.196802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.855 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.197341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.197855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.197893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.198448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.198951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.198991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.199374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.199789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.199828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.200365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.200905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.200943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.201389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.201799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.201838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.202319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.202809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.202847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 
00:28:57.856 [2024-05-15 12:30:26.203356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.203809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.203847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.204374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.204814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.204852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.205302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.205754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.205793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.206254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.206661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.206700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.207231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.207740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.207779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.208286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.208682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.208698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.209157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.209728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.209768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 
00:28:57.856 [2024-05-15 12:30:26.210302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.210836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.210875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.211409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.211871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.211910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.212347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.212863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.212901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.213318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.213793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.213831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.214381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.214893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.214931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.215469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.215907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.215946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.216486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.217030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.217069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 
00:28:57.856 [2024-05-15 12:30:26.217604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.218121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.218160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.218711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.219214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.219254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.219744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.220275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.220315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.220823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.221329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.221370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.221900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.222380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.222420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.856 qpair failed and we were unable to recover it. 00:28:57.856 [2024-05-15 12:30:26.222938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.223416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.856 [2024-05-15 12:30:26.223456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.223955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.224481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.224521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 
00:28:57.857 [2024-05-15 12:30:26.225060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.225442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.225482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.225929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.226406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.226446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.226971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.227509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.227549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.228129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.228581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.228621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.229122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.229673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.229713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.230241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.230706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.230744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.231184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.231693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.231733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 
00:28:57.857 [2024-05-15 12:30:26.232243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.232774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.232812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.233374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.233808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.233846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.234363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.234831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.234870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.235388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.235900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.235938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.236490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.236894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.236910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.237356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.237840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.237885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.238389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.238771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.238809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 
00:28:57.857 [2024-05-15 12:30:26.239346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.239782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.239820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.240290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.240711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.240750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.241265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.241784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.241823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.242348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.242877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.242916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.243424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.243908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.243947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.244470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.244932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.244971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.245485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.245970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.246009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 
00:28:57.857 [2024-05-15 12:30:26.246518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.247047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.247086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.247577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.247998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.248017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.248459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.248992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.249031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.249537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.250003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.250042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.857 [2024-05-15 12:30:26.250581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.251042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.857 [2024-05-15 12:30:26.251081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.857 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.251610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.252102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.252141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.252658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.253169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.253218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 
00:28:57.858 [2024-05-15 12:30:26.253701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.254216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.254256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.254783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.255215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.255254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.255720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.256125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.256141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.256623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.257026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.257065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.257617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.258160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.258210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.258606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.259124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.259176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.259684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.260172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.260221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 
00:28:57.858 [2024-05-15 12:30:26.260760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.261267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.261307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.261759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.262246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.262285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.262829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.263252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.263268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.263674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.264183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.264230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.264692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.265170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.265232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.265750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.266172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.266222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.266738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.267184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.267234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 
00:28:57.858 [2024-05-15 12:30:26.267757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.268294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.268311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.268766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.269301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.269341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.269898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.270408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.270447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.270903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.271375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.271415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.271957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.272402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.272448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.272942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.273501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.273518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.273928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.274394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.274433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 
00:28:57.858 [2024-05-15 12:30:26.274971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.275479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.275519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.275983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.276487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.276527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.277063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.277577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.277617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.278156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.278679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.278719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.279256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.279673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.279712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.280212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.280738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.280776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.858 qpair failed and we were unable to recover it. 00:28:57.858 [2024-05-15 12:30:26.281329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.281775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.858 [2024-05-15 12:30:26.281814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 
00:28:57.859 [2024-05-15 12:30:26.282339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.282868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.282907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.283457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.283968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.284007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.284551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.285034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.285072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.285626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.286074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.286113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.286636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.287101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.287140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.287640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.288167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.288216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.288755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.289262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.289302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 
00:28:57.859 [2024-05-15 12:30:26.289836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.290351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.290391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.290925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.291450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.291490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.291931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.292354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.292394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.292905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.293343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.293383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.293899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.294334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.294373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.294873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.295402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.295443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.296001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.296512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.296552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 
00:28:57.859 [2024-05-15 12:30:26.297085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.297603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.297644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.298210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.298694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.298733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.299282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.299750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.299789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.300305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.300783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.300828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.301328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.301857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.301896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.302435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.302933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.302972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.303478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.303983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.304024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 
00:28:57.859 [2024-05-15 12:30:26.304355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.304774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.304812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.305306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.305839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.305878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.306439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.306932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.306970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.307511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.308019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.308058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.308598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.309107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.309145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.309714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.310185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.310236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.310757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.311266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.311306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 
00:28:57.859 [2024-05-15 12:30:26.311828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.312261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.312301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.859 qpair failed and we were unable to recover it. 00:28:57.859 [2024-05-15 12:30:26.312756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.313248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.859 [2024-05-15 12:30:26.313265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.313655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.314159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.314208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.314741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.315221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.315261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.315808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.316288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.316327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.316821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.317243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.317285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.317746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.318210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.318250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 
00:28:57.860 [2024-05-15 12:30:26.318715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.319145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.319183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.319648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.320072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.320111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.320614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.321097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.321135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.321673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.322177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.322227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.322755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.323279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.323318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.323875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.324380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.324420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.324880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.325345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.325385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 
00:28:57.860 [2024-05-15 12:30:26.325920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.326373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.326413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.326947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.327488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.327527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.328065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.328603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.328642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.329167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.329674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.329713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.330273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.330765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.330804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.331363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.331814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.331852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.332388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.332878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.332917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 
00:28:57.860 [2024-05-15 12:30:26.333475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.333956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.333994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.334436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.334916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.334954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.335463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.335986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.336024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.336589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.336961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.336999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.337538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.338075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.338114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.338642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.339092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.339130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.339657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.340164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.340213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 
00:28:57.860 [2024-05-15 12:30:26.340750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.341262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.341301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.341777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.342290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.342338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.342845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.343276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.343316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.860 qpair failed and we were unable to recover it. 00:28:57.860 [2024-05-15 12:30:26.343793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.860 [2024-05-15 12:30:26.344284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.344324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.344861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.345290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.345327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.345765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.346184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.346232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.346766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.347264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.347303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 
00:28:57.861 [2024-05-15 12:30:26.347827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.348352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.348391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.348944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.349424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.349464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.349974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.350400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.350439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.350953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.351463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.351502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.352034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.352541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.352581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.353088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.353619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.353665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.354115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.354550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.354590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 
00:28:57.861 [2024-05-15 12:30:26.355126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.355681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.355720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.356257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.356692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.356731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.357240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.357751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.357790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.358326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.358836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.358874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.359333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.359835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.359873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.360411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.360930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.360968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.361506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.362011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.362049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 
00:28:57.861 [2024-05-15 12:30:26.362506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.363013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.363051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.363517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.364025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.364064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.364600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.365068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.365106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.365658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.366041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.366080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.366619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.367078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.367116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.367559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.367988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.368027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.861 [2024-05-15 12:30:26.368464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.368849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.368888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 
00:28:57.861 [2024-05-15 12:30:26.369417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.369906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.861 [2024-05-15 12:30:26.369944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.861 qpair failed and we were unable to recover it. 00:28:57.862 [2024-05-15 12:30:26.370501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.862 [2024-05-15 12:30:26.371001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.862 [2024-05-15 12:30:26.371018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.862 qpair failed and we were unable to recover it. 00:28:57.862 [2024-05-15 12:30:26.371502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.862 [2024-05-15 12:30:26.371985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.862 [2024-05-15 12:30:26.372002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.862 qpair failed and we were unable to recover it. 00:28:57.862 [2024-05-15 12:30:26.372475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.862 [2024-05-15 12:30:26.372977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.862 [2024-05-15 12:30:26.373016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.862 qpair failed and we were unable to recover it. 00:28:57.862 [2024-05-15 12:30:26.373566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.862 [2024-05-15 12:30:26.374023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.862 [2024-05-15 12:30:26.374040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.862 qpair failed and we were unable to recover it. 00:28:57.862 [2024-05-15 12:30:26.374531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.862 [2024-05-15 12:30:26.374995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.862 [2024-05-15 12:30:26.375033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:57.862 qpair failed and we were unable to recover it. 00:28:57.862 [2024-05-15 12:30:26.375566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.376023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.376040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.127 qpair failed and we were unable to recover it. 
00:28:58.127 [2024-05-15 12:30:26.376428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.376837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.376853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.127 qpair failed and we were unable to recover it. 00:28:58.127 [2024-05-15 12:30:26.377318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.377748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.377786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.127 qpair failed and we were unable to recover it. 00:28:58.127 [2024-05-15 12:30:26.378280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.378784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.378823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.127 qpair failed and we were unable to recover it. 00:28:58.127 [2024-05-15 12:30:26.379385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.379880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.379919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.127 qpair failed and we were unable to recover it. 00:28:58.127 [2024-05-15 12:30:26.380422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2299682 Killed "${NVMF_APP[@]}" "$@" 00:28:58.127 [2024-05-15 12:30:26.380943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.380960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.127 qpair failed and we were unable to recover it. 00:28:58.127 [2024-05-15 12:30:26.381426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:58.127 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:58.127 [2024-05-15 12:30:26.381931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.381950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.127 qpair failed and we were unable to recover it. 
00:28:58.127 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:58.127 [2024-05-15 12:30:26.382345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:58.127 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.127 [2024-05-15 12:30:26.382784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.382802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.127 qpair failed and we were unable to recover it. 00:28:58.127 [2024-05-15 12:30:26.383271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.383772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.383790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.127 qpair failed and we were unable to recover it. 00:28:58.127 [2024-05-15 12:30:26.384288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.384674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.384691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.127 qpair failed and we were unable to recover it. 00:28:58.127 [2024-05-15 12:30:26.385160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.127 [2024-05-15 12:30:26.385678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.385717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.386219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.386733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.386772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.387297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.387807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.387847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 
00:28:58.128 [2024-05-15 12:30:26.388328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.388814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.388853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.389320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.389785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.389824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.390347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.390854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.390894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.391358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.391775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.391815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2300513 00:28:58.128 [2024-05-15 12:30:26.392269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2300513 00:28:58.128 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:58.128 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2300513 ']' 00:28:58.128 [2024-05-15 12:30:26.392774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.392797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.128 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:58.128 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
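The interleaved shell trace in the last few entries shows the context for this burst of failures: target_disconnect.sh has killed the previous nvmf_tgt instance (the Killed "${NVMF_APP[@]}" message above), and disconnect_init/nvmfappstart is now launching a fresh nvmf_tgt (-i 0 -e 0xFFFF -m 0xF0) inside the cvl_0_0_ns_spdk namespace as PID 2300513, then waiting for it to listen on /var/tmp/spdk.sock. Until that target is back up, every connect attempt from the initiator side is refused, which is what the continuing errno 111 entries record. As a rough, non-SPDK sketch of that "retry until the listener returns" pattern, with the address, port and a 30-second budget chosen purely for illustration:

/* Rough sketch only (not SPDK's reconnect logic): keep retrying a TCP
 * connect to the target address until it is accepted or a deadline passes.
 * 10.0.0.2:4420 comes from the log; the 30 s budget is illustrative. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

static int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, ip, &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
        return fd;                    /* listener is back */

    int saved = errno;                /* keep connect()'s errno across close() */
    close(fd);
    errno = saved;
    return -1;                        /* typically ECONNREFUSED while the target restarts */
}

int main(void)
{
    time_t deadline = time(NULL) + 30;    /* arbitrary 30 s budget */

    while (time(NULL) < deadline) {
        int fd = try_connect("10.0.0.2", 4420);
        if (fd >= 0) {
            printf("target is listening again\n");
            close(fd);
            return 0;
        }
        fprintf(stderr, "connect() failed, errno = %d (%s); retrying\n",
                errno, strerror(errno));
        usleep(200 * 1000);               /* short pause between attempts */
    }

    fprintf(stderr, "gave up waiting for 10.0.0.2:4420\n");
    return 1;
}

In the test itself this waiting is handled by the shell helpers (waitforlisten on the RPC socket plus the NVMe/TCP reconnect path); the loop above only mirrors the behaviour visible in the log.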
00:28:58.128 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:58.128 12:30:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.128 [2024-05-15 12:30:26.395317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.395798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.395827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.396252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.396651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.396673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.397143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.397529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.397548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.397982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.398443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.398462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.398953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.399410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.399428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.399843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.400305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.400323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.400724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.401126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.401143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 
00:28:58.128 [2024-05-15 12:30:26.401548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.401931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.401949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.402419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.402850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.402867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.403332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.403839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.403856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.404325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.404658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.404676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.405111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.405569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.405586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.406075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.406509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.406527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.406988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.407428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.407447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 
00:28:58.128 [2024-05-15 12:30:26.407842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.408205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.408222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.128 qpair failed and we were unable to recover it. 00:28:58.128 [2024-05-15 12:30:26.408616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.128 [2024-05-15 12:30:26.409048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.409065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.409530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.409992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.410009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.410353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.410682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.410698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.411202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.411590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.411606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.412059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.412545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.412562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.412888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.413286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.413305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 
00:28:58.129 [2024-05-15 12:30:26.413711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.414112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.414129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.414549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.414986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.415003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.415467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.415826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.415843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.416254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.416710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.416726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.417189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.417668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.417685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.418204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.418615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.418635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.419120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.419530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.419547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 
00:28:58.129 [2024-05-15 12:30:26.419935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.420323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.420340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.420741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.421145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.421161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.421490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.421869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.421885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.422332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.422717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.422734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.423097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.423567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.423584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.423972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.424408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.424425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.424820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.425280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.425297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 
00:28:58.129 [2024-05-15 12:30:26.425682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.426125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.426142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.426450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.426891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.426907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.427390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.427820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.427837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.428274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.428753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.428770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.429212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.429565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.429583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.429832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.430284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.129 [2024-05-15 12:30:26.430301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.129 qpair failed and we were unable to recover it. 00:28:58.129 [2024-05-15 12:30:26.430697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.431155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.431173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 
00:28:58.130 [2024-05-15 12:30:26.431589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.432042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.432059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.432519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.432895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.432911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.433320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.433652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.433669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.434123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.434441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.434458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.434905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.435287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.435304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.435738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.436000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.436016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.436330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.436716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.436754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 
00:28:58.130 [2024-05-15 12:30:26.437285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.437811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.437852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.438315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.438702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.438741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.439151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.439669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.439708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.440219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.440748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.440788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.441215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.441686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.441726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.442176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.442555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.442595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.443041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.443440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.443482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.443887] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:28:58.130 [2024-05-15 12:30:26.443946] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.130 [2024-05-15 12:30:26.443987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.444418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.444457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.444913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.445186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.445234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.445748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.446249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.446288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.446699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.447110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.447148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.447642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.448161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.448211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.448718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.449138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.449177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.449469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.449961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.450000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 
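The EAL parameter line above shows how the nvmf_tgt command-line options are handed down to DPDK: the core mask -m 0xF0 arrives as -c 0xF0, i.e. CPU bits 4 through 7, so the target's reactors should land on cores 4, 5, 6 and 7, and the -i 0 shared-memory id appears as --file-prefix=spdk0. A self-contained way to decode such a hex mask, purely as an illustration:

# Decode a hex CPU mask of the kind passed via -m / -c (illustrative only).
mask=0xF0
for cpu in $(seq 0 63); do
    if (( (mask >> cpu) & 1 )); then
        printf 'core %d selected\n' "$cpu"
    fi
done
# for 0xF0 this prints cores 4 through 7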
00:28:58.130 [2024-05-15 12:30:26.450378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.450855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.450892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.451321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.451743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.451781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.452235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.452682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.130 [2024-05-15 12:30:26.452720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.130 qpair failed and we were unable to recover it. 00:28:58.130 [2024-05-15 12:30:26.453214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.453628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.453672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.454147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.454520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.454561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.454826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.455227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.455266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.455702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.456210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.456250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 
00:28:58.131 [2024-05-15 12:30:26.456683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.457223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.457264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.457704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.458102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.458141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.458437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.458782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.458820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.459109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.459603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.459643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.460039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.460420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.460437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.460893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.461368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.461408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.461826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.462320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.462377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 
00:28:58.131 [2024-05-15 12:30:26.462674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.463098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.463136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.463634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.464050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.464088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.464594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.465076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.465114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.465624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.466125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.466164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.466601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.467006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.467045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.467476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.467944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.467961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.468337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.468708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.468725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 
00:28:58.131 [2024-05-15 12:30:26.469188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.469615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.469656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.470106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.470534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.470573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.471084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.471361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.471401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.471835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.472344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.472384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.472820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.473268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.473286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.473668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.474117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.474134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 00:28:58.131 [2024-05-15 12:30:26.474610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.474985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.475024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.131 qpair failed and we were unable to recover it. 
00:28:58.131 [2024-05-15 12:30:26.475473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.131 [2024-05-15 12:30:26.475895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.475934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.476367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.476864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.476902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.477426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.477848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.477886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.478390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.478669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.478707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.479002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.479482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.479522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.479782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.480214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.480254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.480764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.481259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.481298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 
00:28:58.132 [2024-05-15 12:30:26.481671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.482172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.482221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.482501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.132 [2024-05-15 12:30:26.482919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.482959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.483466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.483881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.483920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.484422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.484839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.484877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.485310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.485692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.485731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.486212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.486664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.486680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.487165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.487615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.487632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 
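The lone "EAL: No free 2048 kB hugepages reported on node 1" entry is DPDK's memory setup noting that the 2 MB hugepage pool on NUMA node 1 is empty at init time (for example because the pages were reserved on node 0 only); it is informational here rather than a failure. The per-node pools can be inspected on any Linux host through the standard sysfs paths, shown below as a generic illustration rather than part of the test itself:

# Show reserved and free hugepages per NUMA node and page size (standard sysfs).
for node in /sys/devices/system/node/node*; do
    for sz in "$node"/hugepages/hugepages-*; do
        printf '%s %s: total=%s free=%s\n' \
            "$(basename "$node")" "$(basename "$sz")" \
            "$(cat "$sz/nr_hugepages")" "$(cat "$sz/free_hugepages")"
    done
done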
00:28:58.132 [2024-05-15 12:30:26.488023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.488286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.488303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.488693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.489136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.489152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.489640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.490011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.490028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.490455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.490888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.490904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.491357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.491780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.491796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.492228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.492695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.492712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.132 qpair failed and we were unable to recover it. 00:28:58.132 [2024-05-15 12:30:26.493021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.132 [2024-05-15 12:30:26.493466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.493483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 
00:28:58.133 [2024-05-15 12:30:26.493906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.494272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.494289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.494657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.495012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.495028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.495470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.495908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.495925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.496366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.496814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.496831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.497132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.497609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.497626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.498077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.498523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.498540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.498916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.499345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.499361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 
00:28:58.133 [2024-05-15 12:30:26.499746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.500201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.500217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.500607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.501055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.501071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.501433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.501873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.501889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.502334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.502700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.502716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.502981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.503426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.503442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.503809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.504238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.504254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.504633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.504937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.504954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 
00:28:58.133 [2024-05-15 12:30:26.505409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.505856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.505872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.506319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.506640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.506660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.507107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.507571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.507587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.508014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.508384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.508401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.508827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.509269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.509285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.509658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.510025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.510041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.510464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.510907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.510924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 
00:28:58.133 [2024-05-15 12:30:26.511289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.511732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.511749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.512099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.512471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.512488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.133 [2024-05-15 12:30:26.512879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.513231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.133 [2024-05-15 12:30:26.513248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.133 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.513672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.514115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.514131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.514498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.514942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.514961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.515387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.515736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.515753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.516182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.516564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.516581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 
00:28:58.134 [2024-05-15 12:30:26.517000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.517380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.517397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.517807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.518258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.518275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.518727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.519115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.519131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.519467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.519919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.519935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.520381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.520699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.520716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.521161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.521526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.521543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.521918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.522362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.522378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 
00:28:58.134 [2024-05-15 12:30:26.522802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.523246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.523262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.523635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.523951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.523967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.524392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.524814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.524830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.525122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.525574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.525590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.525976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.526147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.526163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.526488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.526837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.526853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.527297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.527757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.527774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 
00:28:58.134 [2024-05-15 12:30:26.528069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.528516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.528533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.528911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.529213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.529230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.529630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.530027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.530043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.530427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.530851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.530867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.531295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.531651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.531667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.532092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.532487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.532503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.532924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.533281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.533297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 
00:28:58.134 [2024-05-15 12:30:26.533742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.534096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.534112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.534485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.534933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.534949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.134 qpair failed and we were unable to recover it. 00:28:58.134 [2024-05-15 12:30:26.535329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.535724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.134 [2024-05-15 12:30:26.535740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.536061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.536442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.536459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.536856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.537220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.537236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.537609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.537989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.135 [2024-05-15 12:30:26.538025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.538041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.538439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.538836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.538852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 
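The repeated posix_sock_create records above are connect() returning errno 111 (ECONNREFUSED on Linux): the initiator keeps dialing the NVMe/TCP listener at 10.0.0.2 port 4420 while, as the interleaved spdk_app_start notice suggests, the target application is still coming up and nothing is accepting on that port yet. A minimal sketch, outside of SPDK and not part of these test scripts, that reproduces the same errno when the host is reachable but no listener is bound to the port (address and port taken from the log above):

    /* Illustrative only, not part of the SPDK test suite. Shows how a plain
     * connect() to 10.0.0.2:4420 reports ECONNREFUSED (errno 111 on Linux)
     * when the host is reachable but nothing is listening, which is the
     * condition posix_sock_create is logging above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on the port this typically prints:
             * connect failed: errno=111 (Connection refused) */
            printf("connect failed: errno=%d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }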
00:28:58.135 [2024-05-15 12:30:26.539247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.539691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.539708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.540006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.540429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.540445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.540799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.541173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.541196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.541618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.541989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.542005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.542451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.542841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.542857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.543240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.543631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.543647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.544071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.544461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.544478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 
00:28:58.135 [2024-05-15 12:30:26.544921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.545318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.545335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.545807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.546228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.546245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.546450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.546843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.546861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.547263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.547662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.547679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.548076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.548456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.548473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.548800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.549002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.549018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.549413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.549784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.549800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 
00:28:58.135 [2024-05-15 12:30:26.550163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.550598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.550615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.550971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.551100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.551116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.551516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.551953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.551969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.552429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.552825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.552841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.553264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.553633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.553649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.554073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.554447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.554463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.554661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.554981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.554997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 
00:28:58.135 [2024-05-15 12:30:26.555373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.555751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.555767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.135 qpair failed and we were unable to recover it. 00:28:58.135 [2024-05-15 12:30:26.556195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.556642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.135 [2024-05-15 12:30:26.556659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.556977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.557285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.557301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.557732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.558100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.558116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.558496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.558800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.558815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.559259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.559581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.559597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.559970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.560211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.560227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 
00:28:58.136 [2024-05-15 12:30:26.560535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.560978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.560994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.561368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.561824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.561840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.562154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.562586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.562606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.562988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.563354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.563371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.563771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.564135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.564152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.564555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.564928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.564944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.565368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.565653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.565669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 
00:28:58.136 [2024-05-15 12:30:26.566057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.566520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.566536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.566920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.567295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.567311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.567683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.568033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.568051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.568447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.568841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.568857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.569296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.569688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.569705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.569839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.570127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.570145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.570556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.570943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.570959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 
00:28:58.136 [2024-05-15 12:30:26.571336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.571783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.571805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.572007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.572432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.572455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.572834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.573277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.573299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.573682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.574129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.574148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.574624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.575039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.575057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.575484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.575854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.575871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.576243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.576617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.576634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 
00:28:58.136 [2024-05-15 12:30:26.577060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.577182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.577204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.577658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.578026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.578043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.136 qpair failed and we were unable to recover it. 00:28:58.136 [2024-05-15 12:30:26.578427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.136 [2024-05-15 12:30:26.578843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.578860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.579214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.579582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.579600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.580025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.580216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.580234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.580606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.581075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.581093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.581398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.581791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.581806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 
00:28:58.137 [2024-05-15 12:30:26.582156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.582546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.582562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.582939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.583362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.583378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.583826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.584096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.584112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.584557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.584891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.584907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.585354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.585716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.585731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.586205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.586635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.586655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.586960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.587407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.587425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 
00:28:58.137 [2024-05-15 12:30:26.587821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.588187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.588208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.588603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.589031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.589048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.589320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.589696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.589713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.590161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.590537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.590554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.590911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.591301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.591318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.591766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.591952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.591968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.592338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.592810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.592826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 
00:28:58.137 [2024-05-15 12:30:26.593215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.593659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.593675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.594085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.594501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.594518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.594990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.595430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.595446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.595896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.596342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.596359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.596783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.597151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.597167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.597603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.598070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.137 [2024-05-15 12:30:26.598086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.137 qpair failed and we were unable to recover it. 00:28:58.137 [2024-05-15 12:30:26.598535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.598979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.598995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 
00:28:58.138 [2024-05-15 12:30:26.599367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.599744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.599760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.600207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.600652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.600669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.601112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.601518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.601535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.601960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.602388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.602404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.602765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.602929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.602945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.603369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.603717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.603733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.604172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.604579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.604598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 
00:28:58.138 [2024-05-15 12:30:26.604978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.605420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.605436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.605813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.606004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.606020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.606458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.606905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.606925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.607299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.607747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.607765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.607916] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.138 [2024-05-15 12:30:26.607945] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.138 [2024-05-15 12:30:26.607955] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.138 [2024-05-15 12:30:26.607964] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.138 [2024-05-15 12:30:26.607972] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.138 [2024-05-15 12:30:26.608092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:58.138 [2024-05-15 12:30:26.608263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.608216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:58.138 [2024-05-15 12:30:26.608304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:58.138 [2024-05-15 12:30:26.608437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.608305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:58.138 [2024-05-15 12:30:26.608454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 
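The repeated "connect() failed, errno = 111" messages above are emitted by posix_sock_create in SPDK's posix sock module; on Linux, errno 111 is ECONNREFUSED, which connect() returns when the peer has no listener on the requested port, so nvme_tcp_qpair_connect_sock cannot establish the TCP qpair to 10.0.0.2 port 4420 and reports it as unrecoverable. The stand-alone C sketch below reproduces that failure mode only as an illustration: the address and port are taken from the log lines above, while the program itself is an assumption of this note and is not part of the SPDK test harness.

/* Illustrative sketch (not from the SPDK tree): a plain TCP connect() to an
 * address/port with no listener fails with errno 111 (ECONNREFUSED) on Linux,
 * which is exactly what the posix_sock_create errors in this log report. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port seen in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address seen in the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}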
00:28:58.138 [2024-05-15 12:30:26.608924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.609335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.609353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.609751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.610202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.610219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.610593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.611042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.611059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.611528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.611971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.611988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.612251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.612701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.612718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.613145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.613556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.613572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.613949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.614395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.614412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 
00:28:58.138 [2024-05-15 12:30:26.614862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.615308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.615325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.615751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.616116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.616133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.616582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.616948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.616965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.617329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.617775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.617791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.618196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.618572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.618589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.619017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.619462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.619479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 00:28:58.138 [2024-05-15 12:30:26.619903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.620275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.620293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.138 qpair failed and we were unable to recover it. 
00:28:58.138 [2024-05-15 12:30:26.620600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.138 [2024-05-15 12:30:26.620953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.620969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.621347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.621797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.621814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.622263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.622684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.622703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.623148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.623503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.623520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.623971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.624359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.624377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.624748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.625169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.625187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.625601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.626044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.626062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 
00:28:58.139 [2024-05-15 12:30:26.626486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.626851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.626869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.627197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.627660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.627678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.628105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.628388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.628407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.628691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.629159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.629177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.629420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.629811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.629828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.630207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.630506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.630523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.630878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.631268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.631284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 
00:28:58.139 [2024-05-15 12:30:26.631672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.632093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.632110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.632490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.632843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.632859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.633084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.633519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.633537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.633914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.634266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.634283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.634513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.635005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.635021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.635473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.635929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.635946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.636351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.636794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.636811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 
00:28:58.139 [2024-05-15 12:30:26.637283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.637420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.637436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.637822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.638288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.638307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.638746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.639181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.639202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.639642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.640088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.640105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.640557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.640998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.641014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.641553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.642036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.642056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.642491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.642888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.642905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 
00:28:58.139 [2024-05-15 12:30:26.643218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.643424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.643440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.643865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.644230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.139 [2024-05-15 12:30:26.644248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.139 qpair failed and we were unable to recover it. 00:28:58.139 [2024-05-15 12:30:26.644558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.645023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.645039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.140 qpair failed and we were unable to recover it. 00:28:58.140 [2024-05-15 12:30:26.645495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.645666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.645682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.140 qpair failed and we were unable to recover it. 00:28:58.140 [2024-05-15 12:30:26.646111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.646554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.646572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.140 qpair failed and we were unable to recover it. 00:28:58.140 [2024-05-15 12:30:26.646939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.647308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.647325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.140 qpair failed and we were unable to recover it. 00:28:58.140 [2024-05-15 12:30:26.647656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.648026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.648042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.140 qpair failed and we were unable to recover it. 
00:28:58.140 [2024-05-15 12:30:26.648508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.648901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.648919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.140 qpair failed and we were unable to recover it. 00:28:58.140 [2024-05-15 12:30:26.649351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.649732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.140 [2024-05-15 12:30:26.649751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.140 qpair failed and we were unable to recover it. 00:28:58.404 [2024-05-15 12:30:26.650182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.650564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.650581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 00:28:58.404 [2024-05-15 12:30:26.651007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.651377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.651394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 00:28:58.404 [2024-05-15 12:30:26.651788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.652106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.652121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 00:28:58.404 [2024-05-15 12:30:26.652337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.652757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.652777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 00:28:58.404 [2024-05-15 12:30:26.653260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.653717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.653736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 
00:28:58.404 [2024-05-15 12:30:26.654185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.654557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.654573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 00:28:58.404 [2024-05-15 12:30:26.655024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.655407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.655424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 00:28:58.404 [2024-05-15 12:30:26.655812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.656186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.656206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 00:28:58.404 [2024-05-15 12:30:26.656404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.656827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.656843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 00:28:58.404 [2024-05-15 12:30:26.657275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.657677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.657697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 00:28:58.404 [2024-05-15 12:30:26.658052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.658414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.658431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 00:28:58.404 [2024-05-15 12:30:26.658802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.658998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.659015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 
00:28:58.404 [2024-05-15 12:30:26.659369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.659725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.659741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.404 qpair failed and we were unable to recover it. 00:28:58.404 [2024-05-15 12:30:26.660197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.404 [2024-05-15 12:30:26.660650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.660666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.661036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.661398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.661415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.661783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.662003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.662019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.662456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.662890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.662907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.663256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.663696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.663713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.664161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.664542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.664558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 
00:28:58.405 [2024-05-15 12:30:26.665006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.665450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.665466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.665897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.666316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.666332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.666794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.667168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.667184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.667611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.668004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.668020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.668326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.668772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.668788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.669164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.669537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.669554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.670014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.670433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.670450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 
00:28:58.405 [2024-05-15 12:30:26.670825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.671244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.671260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.671609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.672052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.672068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.672461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.672827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.672843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.673290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.673729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.673745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.674122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.674321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.674337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.674748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.675217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.675233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.675700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.676048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.676064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 
00:28:58.405 [2024-05-15 12:30:26.676437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.676888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.676904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.677349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.677795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.677811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.678247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.678603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.678619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.678928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.679289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.679305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.679750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.680196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.680212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.680606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.680975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.680991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.681302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.681654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.681670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 
00:28:58.405 [2024-05-15 12:30:26.682078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.682495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.682514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.682925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.683313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.683332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-05-15 12:30:26.683687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.684013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-05-15 12:30:26.684028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.684474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.684840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.684857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.685229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.685602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.685618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.686012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.686458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.686476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.686840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.687284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.687301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 
00:28:58.406 [2024-05-15 12:30:26.687746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.688099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.688115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.688492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.688938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.688955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.689131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.689449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.689466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.689918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.690338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.690355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.690732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.691100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.691116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.691557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.691936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.691952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.692380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.692829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.692845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 
00:28:58.406 [2024-05-15 12:30:26.693237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.693594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.693610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.694005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.694428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.694445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.694657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.695094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.695109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.695554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.695999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.696016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.696454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.696821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.696837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.697286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.697668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.697684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.698128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.698526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.698543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 
00:28:58.406 [2024-05-15 12:30:26.698919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.699296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.699313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.699760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.700204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.700221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.700654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.701100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.701116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.701565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.702011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.702027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.702458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.702820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.702836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.703283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.703637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.703653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.704048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.704367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.704384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 
00:28:58.406 [2024-05-15 12:30:26.704831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.705202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.705219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.705645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.706072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.706088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.706555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.706949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.706965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.707315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.707684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.707700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-05-15 12:30:26.708143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-05-15 12:30:26.708528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.708545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.708967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.709337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.709353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.709796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.710239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.710256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 
00:28:58.407 [2024-05-15 12:30:26.710649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.711015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.711031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.711455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.711845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.711861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.712307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.712754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.712770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.713096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.713475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.713492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.713810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.714196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.714213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.714660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.715106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.715122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.715471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.715845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.715861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 
00:28:58.407 [2024-05-15 12:30:26.716254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.716699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.716714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.717076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.717443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.717473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.717855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.718300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.718316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.718760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.719146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.719162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.719532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.719951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.719967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.720414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.720883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.720900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.721301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.721698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.721714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 
00:28:58.407 [2024-05-15 12:30:26.722097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.722457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.722474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.722897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.723252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.723268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.723647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.724089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.724105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.724550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.724905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.724920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.725304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.725680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.725696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.726141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.726559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.726575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.726892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.727334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.727350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 
00:28:58.407 [2024-05-15 12:30:26.727794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.728163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.728179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.728642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.728935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.728951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.729325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.729780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.729796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.730240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.730682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.730698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.731138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.731511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.731530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-05-15 12:30:26.731929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.732375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-05-15 12:30:26.732392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.732860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.733287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.733304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 
00:28:58.408 [2024-05-15 12:30:26.733660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.734098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.734115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.734469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.734910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.734926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.735352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.735801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.735818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.736175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.736618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.736635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.737010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.737461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.737478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.737874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.738235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.738252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.738703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.739071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.739087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 
00:28:58.408 [2024-05-15 12:30:26.739533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.739958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.739978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.740427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.740866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.740883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.741303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.741612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.741628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.742059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.742260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.742276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.742687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.743111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.743128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.743448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.743827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.743843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.744148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.744521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.744538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 
00:28:58.408 [2024-05-15 12:30:26.744900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.745287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.745304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.745579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.745942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.745960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.746273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.746641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.746658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.747105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.747426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.747445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.747644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.748029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.748046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.748441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.748796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.748812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.749188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.749562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.749579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 
00:28:58.408 [2024-05-15 12:30:26.749953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.750097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.750113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.750529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.750956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.750973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.751308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.751687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.751704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.408 qpair failed and we were unable to recover it. 00:28:58.408 [2024-05-15 12:30:26.752075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.752453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.408 [2024-05-15 12:30:26.752470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.753287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.753728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.753745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.754053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.754412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.754429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.754802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.755162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.755181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 
00:28:58.409 [2024-05-15 12:30:26.755360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.755717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.755733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.756112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.756499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.756515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.756825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.757249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.757266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.757433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.757800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.757817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.758293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.758717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.758733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.759093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.759534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.759550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.759922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.760294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.760310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 
00:28:58.409 [2024-05-15 12:30:26.760756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.761129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.761145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.761511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.761836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.761852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.762232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.762533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.762549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.762724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.762854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.762870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.763251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.763547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.763563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.763876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.764225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.764246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.764642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.764999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.765015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 
00:28:58.409 [2024-05-15 12:30:26.765408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.765848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.765865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.766230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.766545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.766561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.766937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.767224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.767241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.767607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.767950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.767967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.768273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.768564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.768581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.768876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.769243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.769260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.769625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.769915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.769932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 
00:28:58.409 [2024-05-15 12:30:26.770225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.770582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.770598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.770949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.771335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.771352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.771710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.772161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.772177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.409 [2024-05-15 12:30:26.772571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.772936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.409 [2024-05-15 12:30:26.772952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.409 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.773248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.773621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.773638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.773948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.774394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.774410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.774779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.775152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.775168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 
00:28:58.410 [2024-05-15 12:30:26.775543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.775839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.775856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.776286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.776653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.776669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.777050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.777402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.777419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.777843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.778210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.778227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.778685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.778965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.778982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.779346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.779770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.779786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.780156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.780600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.780617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 
00:28:58.410 [2024-05-15 12:30:26.780938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.781357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.781374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.781800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.782162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.782178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.782535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.782895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.782911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.783219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.783575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.783592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.783949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.784381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.784397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.784688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.785133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.785148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.785438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.785861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.785877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 
00:28:58.410 [2024-05-15 12:30:26.786244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.786666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.786682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.787001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.787361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.787378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.787744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.788061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.788077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.788427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.788736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.788754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.789200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.789500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.789517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.789901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.790322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.790338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.790739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.791182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.791204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 
00:28:58.410 [2024-05-15 12:30:26.791603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.791976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.791992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.792364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.792722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.792738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.793185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.793555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.793572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.793955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.794319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.794336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-05-15 12:30:26.794652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.795023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-05-15 12:30:26.795040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.795475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.795894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.795910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.796362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.796729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.796746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 
00:28:58.411 [2024-05-15 12:30:26.797102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.797251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.797268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.797640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.798064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.798080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.798506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.798949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.798965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.799286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.799657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.799673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.800055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.800501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.800518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.800869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.801228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.801245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.801691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.802111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.802127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 
00:28:58.411 [2024-05-15 12:30:26.802505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.802871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.802887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.803179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.803565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.803581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.803949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.804371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.804388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.804691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.805136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.805153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.805469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.805889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.805905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.806330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.806715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.806731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.806943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.807384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.807401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 
00:28:58.411 [2024-05-15 12:30:26.807774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.808212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.808229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.808675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.809040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.809057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.809368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.809830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.809847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.810287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.810731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.810747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.811187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.811562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.811578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-05-15 12:30:26.811947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-05-15 12:30:26.812255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.812272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.812699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.812985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.813001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 
00:28:58.412 [2024-05-15 12:30:26.813313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.813511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.813527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.813833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.814282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.814299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.814604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.814972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.814988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.815317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.815757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.815773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.816224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.816587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.816605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.816969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.817333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.817350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.817798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.817997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.818013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 
00:28:58.412 [2024-05-15 12:30:26.818381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.818800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.818816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.819196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.819637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.819654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.820075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.820495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.820512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.820981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.821348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.821365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.821735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.822179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.822199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.822637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.822999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.823015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.823383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.823828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.823844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 
00:28:58.412 [2024-05-15 12:30:26.824045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.824428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.824444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.824755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.825121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.825137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.825586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.826026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.826042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.826503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.826867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.826883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.827252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.827670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.827686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-05-15 12:30:26.827907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-05-15 12:30:26.828274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.828291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.828726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.829038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.829054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 
00:28:58.413 [2024-05-15 12:30:26.829504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.829806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.829822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.829988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.830344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.830360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.830804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.831241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.831258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.831696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.832118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.832134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.832514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.832933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.832950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.833346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.833711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.833727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.834141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.834488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.834504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 
00:28:58.413 [2024-05-15 12:30:26.834951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.835323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.835340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.835649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.836021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.836038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.836402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.836700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.836716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.837139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.837472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.837489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.837853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.838165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.838181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.838577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.839021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.839037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.839355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.839776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.839792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 
00:28:58.413 [2024-05-15 12:30:26.840164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.840529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.840545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.840968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.841265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.841282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.841578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.841998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.842015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-05-15 12:30:26.842388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.842837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-05-15 12:30:26.842853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.414 [2024-05-15 12:30:26.843165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.843485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.843501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.414 qpair failed and we were unable to recover it. 00:28:58.414 [2024-05-15 12:30:26.843929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.844374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.844390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.414 qpair failed and we were unable to recover it. 00:28:58.414 [2024-05-15 12:30:26.844745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.845201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.845218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.414 qpair failed and we were unable to recover it. 
00:28:58.414 [2024-05-15 12:30:26.845518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.845911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.845927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.414 qpair failed and we were unable to recover it. 00:28:58.414 [2024-05-15 12:30:26.846302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.846722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.846741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.414 qpair failed and we were unable to recover it. 00:28:58.414 [2024-05-15 12:30:26.847105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.847498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.847514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.414 qpair failed and we were unable to recover it. 00:28:58.414 [2024-05-15 12:30:26.847942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.848326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.848343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.414 qpair failed and we were unable to recover it. 00:28:58.414 [2024-05-15 12:30:26.848624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.848914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.414 [2024-05-15 12:30:26.848930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.849213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.849578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.849594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.849949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.850303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.850319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 
00:28:58.415 [2024-05-15 12:30:26.850704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.851017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.851033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.851479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.851775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.851791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.852147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.852511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.852527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.852885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.853330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.853347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.853653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.854011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.854029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.854345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.854695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.854711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.855028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.855497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.855514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 
00:28:58.415 [2024-05-15 12:30:26.855881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.856327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.856343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.856707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.857069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.857085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.857443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.857815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.857832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.858139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.858512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.858529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.858916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.859232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.415 [2024-05-15 12:30:26.859249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.415 qpair failed and we were unable to recover it. 00:28:58.415 [2024-05-15 12:30:26.859645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.859923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.859940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.860385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.860762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.860778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 
00:28:58.416 [2024-05-15 12:30:26.861086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.861391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.861410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.861868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.862156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.862172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.862603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.862972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.862988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.863195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.863567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.863583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.863981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.864343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.864360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.864755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.865202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.865218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.865528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.865670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.865686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 
00:28:58.416 [2024-05-15 12:30:26.865967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.866327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.866344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.866571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.867003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.867019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.867325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.867623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.867639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.867997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.868347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.868367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.868528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.868900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.868917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.869306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.869773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.869790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.870084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.870456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.870472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 
00:28:58.416 [2024-05-15 12:30:26.870897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.871340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.871357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.871779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.872206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.872222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.872617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.872764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.872780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-05-15 12:30:26.873100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-05-15 12:30:26.873452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.873468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.873889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.874264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.874280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.874580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.874972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.874988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.875372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.875817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.875833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 
00:28:58.417 [2024-05-15 12:30:26.876268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.876718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.876734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.877059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.877423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.877440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.877799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.878222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.878238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.878606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.878970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.878986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.879411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.879772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.879788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.880244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.880637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.880653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.881023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.881388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.881405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 
00:28:58.417 [2024-05-15 12:30:26.881694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.882059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.882076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.882504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.882892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.882908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.883214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.883607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.883623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.883976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.884359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.884375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.884824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.885197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.885213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.885663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.886083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.886100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.886481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.886697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.886713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 
00:28:58.417 [2024-05-15 12:30:26.887129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.887507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.887524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-05-15 12:30:26.887891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.888269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-05-15 12:30:26.888285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.888707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.889104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.889121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.889546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.889848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.889864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.890159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.890480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.890496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.890812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.891253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.891269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.891638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.891986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.892002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 
00:28:58.418 [2024-05-15 12:30:26.892373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.892501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.892518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.892886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.893327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.893344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.893744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.894137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.894157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.894525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.894826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.894842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.895267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.895709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.895725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.896146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.896502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.896519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.896966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.897337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.897353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 
00:28:58.418 [2024-05-15 12:30:26.897662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.898054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.898071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.898438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.898816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.898832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.899152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.899599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.899615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.899975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.900348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.900365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.900660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.901022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.901038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.901424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.901733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.901748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.902104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.902424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.902440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 
00:28:58.418 [2024-05-15 12:30:26.902828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.903122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-05-15 12:30:26.903139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-05-15 12:30:26.903518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.903879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.903896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.904198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.904640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.904656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.905043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.905442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.905459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.905762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.906209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.906226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.906544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.906848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.906864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.907239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.907545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.907561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 
00:28:58.419 [2024-05-15 12:30:26.907870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.908251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.908268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.908562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.908958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.908974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.909286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.909642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.909658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.910082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.910451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.910468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.910898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.911273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.911289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.911604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.911975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.911992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.912300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.912711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.912727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 
00:28:58.419 [2024-05-15 12:30:26.913036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.913395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.913412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.913734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.914158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.914174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.914582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.914939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.914955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.915311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.915685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.915702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.915995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.916280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.916296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.916642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.917063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.917079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 00:28:58.419 [2024-05-15 12:30:26.917437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.917800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.419 [2024-05-15 12:30:26.917817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.419 qpair failed and we were unable to recover it. 
00:28:58.419 [2024-05-15 12:30:26.918262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.918556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.918572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.420 qpair failed and we were unable to recover it. 00:28:58.420 [2024-05-15 12:30:26.918967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.919316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.919332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.420 qpair failed and we were unable to recover it. 00:28:58.420 [2024-05-15 12:30:26.919706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.920070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.920086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.420 qpair failed and we were unable to recover it. 00:28:58.420 [2024-05-15 12:30:26.920472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.920863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.920879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba8000b90 with addr=10.0.0.2, port=4420 00:28:58.420 qpair failed and we were unable to recover it. 00:28:58.420 [2024-05-15 12:30:26.921274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.921672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.921692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2211560 with addr=10.0.0.2, port=4420 00:28:58.420 qpair failed and we were unable to recover it. 00:28:58.420 [2024-05-15 12:30:26.922082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.922458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.922475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.420 qpair failed and we were unable to recover it. 00:28:58.420 [2024-05-15 12:30:26.922833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.923269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.923288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.420 qpair failed and we were unable to recover it. 
00:28:58.420 [2024-05-15 12:30:26.923651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.924068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.924086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.420 qpair failed and we were unable to recover it. 00:28:58.420 [2024-05-15 12:30:26.924459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.924811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.924826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.420 qpair failed and we were unable to recover it. 00:28:58.420 [2024-05-15 12:30:26.925186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.925549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.925563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.420 qpair failed and we were unable to recover it. 00:28:58.420 [2024-05-15 12:30:26.925981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.926265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.420 [2024-05-15 12:30:26.926279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.420 qpair failed and we were unable to recover it. 00:28:58.420 [2024-05-15 12:30:26.926698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.685 [2024-05-15 12:30:26.927136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.685 [2024-05-15 12:30:26.927151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.685 qpair failed and we were unable to recover it. 00:28:58.685 [2024-05-15 12:30:26.927503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.685 [2024-05-15 12:30:26.927812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.685 [2024-05-15 12:30:26.927826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.685 qpair failed and we were unable to recover it. 00:28:58.685 [2024-05-15 12:30:26.928241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.685 [2024-05-15 12:30:26.928595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.685 [2024-05-15 12:30:26.928610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.685 qpair failed and we were unable to recover it. 
00:28:58.685 [2024-05-15 12:30:26.928909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.685 [2024-05-15 12:30:26.929257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.685 [2024-05-15 12:30:26.929271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.685 qpair failed and we were unable to recover it. 00:28:58.685 [2024-05-15 12:30:26.929588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.929959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.929973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.930360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.930704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.930718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.931157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.931511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.931523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.931872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.932228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.932240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.932610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.932973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.932984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.933293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.933646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.933659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 
00:28:58.686 [2024-05-15 12:30:26.934039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.934404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.934416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.934708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.935119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.935131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.935434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.935787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.935799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.936171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.936615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.936628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.937042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.937406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.937419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.937720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.938016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.938028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.938468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.938749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.938762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 
00:28:58.686 [2024-05-15 12:30:26.939050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.939407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.939420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.939806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.940100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.940112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.940461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.940762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.940774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.941194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.941554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.941566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.941993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.942355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.942368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.942686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.942982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.942994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.943342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.943649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.943660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 
00:28:58.686 [2024-05-15 12:30:26.943954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.944299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.944311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.944655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.945018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.945030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.945478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.945754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.945766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.946116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.946504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.946516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.946863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.947172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.947184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.947545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.947900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.947912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.948279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.948573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.948585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 
00:28:58.686 [2024-05-15 12:30:26.949016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.949363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.949375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.949763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.950176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.950187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.686 qpair failed and we were unable to recover it. 00:28:58.686 [2024-05-15 12:30:26.950546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.686 [2024-05-15 12:30:26.950958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.950970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.951182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.951615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.951627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.951968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.952341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.952353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.952717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.953126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.953137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.953453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.953748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.953760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 
00:28:58.687 [2024-05-15 12:30:26.954171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.954533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.954545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.954901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.955264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.955276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.955688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.956010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.956022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.956381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.956794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.956806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.957233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.957592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.957604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.957957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.958322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.958335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.958705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.959084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.959095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 
00:28:58.687 [2024-05-15 12:30:26.959530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.959887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.959898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.960198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.960544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.960556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.960904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.961365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.961376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.961746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.962034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.962045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.962412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.962701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.962713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.963126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.963339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.963351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.963764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.964055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.964067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 
00:28:58.687 [2024-05-15 12:30:26.964483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.964830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.964843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.965130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.965494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.965508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.965865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.966158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.966170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.966525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.966883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.966896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.967309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.967617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.967629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.967821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.968233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.968245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.968603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.968946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.968958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 
00:28:58.687 [2024-05-15 12:30:26.969248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.969605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.969616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.970041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.970434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.970446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.970800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.971007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.971019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.971376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.971794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.687 [2024-05-15 12:30:26.971806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.687 qpair failed and we were unable to recover it. 00:28:58.687 [2024-05-15 12:30:26.972249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.972662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.972676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.973028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.973326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.973338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.973691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.974101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.974113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 
00:28:58.688 [2024-05-15 12:30:26.974474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.974822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.974834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.975139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.975494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.975506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.975799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.976128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.976140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.976596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.977008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.977020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.977326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.977684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.977695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.978107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.978520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.978532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.978947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.979309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.979321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 
00:28:58.688 [2024-05-15 12:30:26.979737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.980072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.980086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.980448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.980865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.980877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.981185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.981485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.981497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.981841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.982132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.982144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.982274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.982550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.982562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.982925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.983300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.983312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.983725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.984026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.984038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 
00:28:58.688 [2024-05-15 12:30:26.984451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.984809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.984822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.985194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.985556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.985568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.985944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.986291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.986304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.986603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.986894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.986908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.987027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.987366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.987379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.987792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.988152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.988164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.988597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.988887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.988899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 
00:28:58.688 [2024-05-15 12:30:26.989281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.989715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.989727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.990090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.990410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.990422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.990791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.990923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.990934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.991319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.991687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.991700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.992047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.992427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.992439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.688 qpair failed and we were unable to recover it. 00:28:58.688 [2024-05-15 12:30:26.992785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.688 [2024-05-15 12:30:26.993202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.993214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:26.993576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.993924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.993936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 
00:28:58.689 [2024-05-15 12:30:26.994280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.994585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.994597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:26.994896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.995253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.995265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:26.995626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.995922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.995934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:26.996347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.996634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.996646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:26.996993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.997411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.997423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:26.997774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.998146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.998158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:26.998500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.998933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.998945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 
00:28:58.689 [2024-05-15 12:30:26.999358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.999710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:26.999723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.000159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.000304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.000316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.000627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.001040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.001053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.001411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.001706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.001718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.002003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.002342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.002354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.002790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.003138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.003150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.003436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.003875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.003887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 
00:28:58.689 [2024-05-15 12:30:27.004247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.004660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.004672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.005105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.005465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.005478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.005823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.006116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.006128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.006500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.006877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.006888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.007184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.007532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.007544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.007906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.008296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.008308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.008724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.008858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.008870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 
00:28:58.689 [2024-05-15 12:30:27.009284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.009699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.009711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.010085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.010442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.010454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.010816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.011194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.011206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.011573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.012021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.012033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.012398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.012758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.012770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.013208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.013560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.013572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.689 qpair failed and we were unable to recover it. 00:28:58.689 [2024-05-15 12:30:27.013867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.014276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.689 [2024-05-15 12:30:27.014288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 
00:28:58.690 [2024-05-15 12:30:27.014671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.015033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.015044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.015402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.015758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.015770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.016126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.016526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.016539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.016904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.017251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.017263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.017616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.017915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.017927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.018359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.018733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.018745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.019083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.019492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.019504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 
00:28:58.690 [2024-05-15 12:30:27.019885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.020155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.020167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.020536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.020835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.020847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.021199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.021558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.021570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.021878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.022173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.022185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.022650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.023009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.023021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.023373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.023718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.023731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.024092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.024447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.024459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 
00:28:58.690 [2024-05-15 12:30:27.024896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.025085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.025096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.025480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.025922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.025934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.026209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.026552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.026564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.026901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.027274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.027286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.027653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.028016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.028028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.028338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.028635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.028647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.028853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.029213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.029231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 
00:28:58.690 [2024-05-15 12:30:27.029577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.029988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.030000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.030316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.030670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.030682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.690 [2024-05-15 12:30:27.030984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.031256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.690 [2024-05-15 12:30:27.031268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.690 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.031634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.032070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.032082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.032373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.032834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.032846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.033151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.033586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.033598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.033888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.034171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.034182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 
00:28:58.691 [2024-05-15 12:30:27.034600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.034947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.034959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.035270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.035683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.035695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.036060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.036475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.036487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.036849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.037228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.037240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.037650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.037948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.037959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.038247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.038612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.038624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.038971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.039330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.039342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 
00:28:58.691 [2024-05-15 12:30:27.039759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.040121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.040133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.040512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.040870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.040882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.041252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.041557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.041569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.042006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.042287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.042299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.042650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.042947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.042959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.043396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.043702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.043714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.044131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.044586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.044599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 
00:28:58.691 [2024-05-15 12:30:27.044980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.045135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.045147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.045442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.045876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.045888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.046189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.046611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.046623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.046988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.047278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.047290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.047641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.047989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.048001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.048209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.048499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.048511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.048926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.049228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.049240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 
00:28:58.691 [2024-05-15 12:30:27.049620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.049967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.049978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.050264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.050637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.050649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.051071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.051448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.051460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.691 qpair failed and we were unable to recover it. 00:28:58.691 [2024-05-15 12:30:27.051838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.691 [2024-05-15 12:30:27.052198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.052210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.052492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.052775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.052787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.053161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.053439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.053451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.053735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.054079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.054091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 
00:28:58.692 [2024-05-15 12:30:27.054471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.054828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.054840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.055138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.055540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.055552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.055924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.056291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.056303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.056739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.057116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.057128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.057481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.057772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.057784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.058223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.058594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.058606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.058942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.059303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.059315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 
00:28:58.692 [2024-05-15 12:30:27.059671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.060054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.060065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.060417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.060840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.060852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.061290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.061597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.061609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.062056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.062418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.062430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.062740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.063094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.063106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.063454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.063849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.063861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.064248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.064527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.064539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 
00:28:58.692 [2024-05-15 12:30:27.064840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.065274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.065286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.065578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.066007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.066019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.066389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.066771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.066783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.067133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.067544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.067557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.067911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.068273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.068285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.068635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.068989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.069001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.069378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.069722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.069734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 
00:28:58.692 [2024-05-15 12:30:27.070048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.070426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.070439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.070873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.071239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.071251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.071558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.071866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.071878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.072267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.072457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.072470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.692 [2024-05-15 12:30:27.072887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.073300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.692 [2024-05-15 12:30:27.073312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.692 qpair failed and we were unable to recover it. 00:28:58.693 [2024-05-15 12:30:27.073692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.693 [2024-05-15 12:30:27.073983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.693 [2024-05-15 12:30:27.073996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.693 qpair failed and we were unable to recover it. 00:28:58.693 [2024-05-15 12:30:27.074284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.693 [2024-05-15 12:30:27.074407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.693 [2024-05-15 12:30:27.074419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.693 qpair failed and we were unable to recover it. 
00:28:58.693 [2024-05-15 12:30:27.074788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.693 [2024-05-15 12:30:27.075163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.693 [2024-05-15 12:30:27.075175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420
00:28:58.693 qpair failed and we were unable to recover it.
[... the same four-line failure cycle repeats for every reconnect attempt logged between 12:30:27.075 and 12:30:27.176: two "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111" entries, one "nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." ...]
00:28:58.698 [2024-05-15 12:30:27.176557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.698 [2024-05-15 12:30:27.176925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.698 [2024-05-15 12:30:27.176937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420
00:28:58.698 qpair failed and we were unable to recover it.
00:28:58.698 [2024-05-15 12:30:27.177292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.177594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.177606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.177975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.178279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.178292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.178654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.179021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.179033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.179379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.179733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.179745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.180156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.180454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.180466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.180589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.180942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.180954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.181315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.181614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.181626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 
00:28:58.698 [2024-05-15 12:30:27.181839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.182220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.182232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.182447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.182750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.182762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.183057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.183431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.183444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.183864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.184230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.184242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.184640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.184917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.184929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.185229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.185348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.185362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.185531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.185943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.185955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 
00:28:58.698 [2024-05-15 12:30:27.186269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.186650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.186662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.187022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.187372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.187384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.187687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.188035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.188047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.188356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.188654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.188666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.189019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.189388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.189400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.189701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.189825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.189837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.190255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.190627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.190640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 
00:28:58.698 [2024-05-15 12:30:27.190921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.191269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.191281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.698 qpair failed and we were unable to recover it. 00:28:58.698 [2024-05-15 12:30:27.191654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.698 [2024-05-15 12:30:27.192070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.192085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.192371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.192812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.192824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.193308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.193732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.193744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.194055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.194418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.194430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.194792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.195093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.195105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.195536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.195896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.195908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 
00:28:58.699 [2024-05-15 12:30:27.196266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.196540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.196552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.196878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.197080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.197092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.197446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.197734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.197747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.197944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.198227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.198240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.198533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.198845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.198859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.199170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.199608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.199621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.199920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.200225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.200237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 
00:28:58.699 [2024-05-15 12:30:27.200548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.200839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.200851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.201070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.201377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.201389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.201739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.202091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.202104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.202395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.202751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.202763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.203040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.203344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.203356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.203712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.203915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.203927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.204344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.204695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.204707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 
00:28:58.699 [2024-05-15 12:30:27.205176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.205613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.205628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.205921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.206277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.206290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.206770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.207060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.699 [2024-05-15 12:30:27.207072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.699 qpair failed and we were unable to recover it. 00:28:58.699 [2024-05-15 12:30:27.207370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.207713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.207726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.208072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.208370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.208383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.208751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.209111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.209123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.209431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.209714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.209727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 
00:28:58.963 [2024-05-15 12:30:27.210092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.210386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.210398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.210747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.211103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.211116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.211480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.211860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.211872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.212308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.212647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.212659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.213031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.213325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.213337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.213711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.214098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.214111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.214330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.214692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.214704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 
00:28:58.963 [2024-05-15 12:30:27.214978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.215407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.215421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.215704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.216066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.216078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.216399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.216553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.216566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.216856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.217152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.217164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.217463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.217764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.217776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.218210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.218565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.218577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.218933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.219289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.219302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 
00:28:58.963 [2024-05-15 12:30:27.219661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.219961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.219973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.220392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.220678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.220690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.221005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.221435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.221448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.963 [2024-05-15 12:30:27.221594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.222009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.963 [2024-05-15 12:30:27.222021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.963 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.222314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.222672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.222684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.223116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.223531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.223543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.223834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.224208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.224221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 
00:28:58.964 [2024-05-15 12:30:27.224519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.224954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.224967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.225262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.225644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.225656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.225951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.226298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.226311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.226681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.227063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.227075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.227440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.227804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.227816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.228177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.228589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.228601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.228898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.229273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.229285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 
00:28:58.964 [2024-05-15 12:30:27.229502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.229913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.229925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.230227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.230583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.230595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.230883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.231298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.231310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.231521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.231873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.231885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.232183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.232527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.232539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.232900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.233205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.233218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.233511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.233779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.233790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 
00:28:58.964 [2024-05-15 12:30:27.234100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.234246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.234258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.234546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.234836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.234848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.235264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.235581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.235594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.235953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.236303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.236315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.236584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.236880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.236892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.237264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.237583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.237595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.237891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.238163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.238174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 
00:28:58.964 [2024-05-15 12:30:27.238548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.238910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.238922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.239357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.239791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.239803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.240159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.240463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.240475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.240919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.241269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.241281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.241637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.242046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.242058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.964 qpair failed and we were unable to recover it. 00:28:58.964 [2024-05-15 12:30:27.242363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.964 [2024-05-15 12:30:27.242720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.965 [2024-05-15 12:30:27.242732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.965 qpair failed and we were unable to recover it. 00:28:58.965 [2024-05-15 12:30:27.243100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.965 [2024-05-15 12:30:27.243539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.965 [2024-05-15 12:30:27.243551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.965 qpair failed and we were unable to recover it. 
00:28:58.965 [2024-05-15 12:30:27.243843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.965 [2024-05-15 12:30:27.244118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.965 [2024-05-15 12:30:27.244130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420
00:28:58.965 qpair failed and we were unable to recover it.
00:28:58.965 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:28:58.966 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0
00:28:58.966 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:58.966 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable
00:28:58.966 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
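The errno that repeats throughout the entries above is worth pinning down: on Linux, errno 111 is ECONNREFUSED, which is what a plain TCP connect() returns while nothing is listening on the NVMe/TCP port during the disconnect window. The snippet below is a minimal sketch using ordinary Python sockets, not SPDK code; 10.0.0.2:4420 is simply the address the initiator is retrying in this log, and the sketch assumes the host is reachable but has no listener on that port.

# Minimal sketch (not SPDK code): reproduce the errno the initiator keeps
# logging while the target is down. Assumes the host is reachable but has
# no listener on the NVMe/TCP port, so the connect is actively refused.
import errno
import socket

def try_connect(addr: str = "10.0.0.2", port: int = 4420) -> int:
    """Return the errno of a failed TCP connect attempt, or 0 on success."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        try:
            sock.connect((addr, port))
        except OSError as exc:
            return exc.errno or 0
    return 0

if __name__ == "__main__":
    rc = try_connect()
    # With the target stopped this prints: 111 ECONNREFUSED
    print(rc, errno.errorcode.get(rc, "OK"))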
00:28:58.968 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:58.968 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:58.968 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:58.968 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:58.969 Malloc0
00:28:58.969 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:58.969 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:58.969 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:58.969 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
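For orientation, the two rpc_cmd lines traced above are the test's target-side setup: host/target_disconnect.sh creates a 64 MB malloc bdev named Malloc0 (512-byte blocks) and then brings up the NVMe-oF TCP transport, after which the target prints the *** TCP Transport Init *** notice below. rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py client, so outside the harness the same two calls could be issued roughly as sketched here; the rpc.py path and its default /var/tmp/spdk.sock socket are assumptions about a local SPDK checkout, while the arguments themselves are copied from the trace.

# Hedged sketch: issue the same two setup RPCs outside the test harness.
# The scripts/rpc.py path (and its default /var/tmp/spdk.sock socket) are
# assumptions about a local SPDK source tree; the arguments are the ones
# shown in the rpc_cmd trace above.
import subprocess

RPC = "./scripts/rpc.py"  # assumed location inside an SPDK checkout

def rpc(*args: str) -> str:
    """Run one SPDK JSON-RPC call through rpc.py and return its stdout."""
    result = subprocess.run([RPC, *args], check=True,
                            capture_output=True, text=True)
    return result.stdout.strip()

if __name__ == "__main__":
    # host/target_disconnect.sh@19: 64 MB malloc bdev with 512 B blocks -> prints "Malloc0"
    print(rpc("bdev_malloc_create", "64", "512", "-b", "Malloc0"))
    # host/target_disconnect.sh@21: start the NVMe-oF TCP transport
    rpc("nvmf_create_transport", "-t", "tcp", "-o")

The Malloc0 line and the [[ 0 == 0 ]] return-code check recorded just above are the output and success test the harness logs for the first of these calls.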
00:28:58.969 [2024-05-15 12:30:27.331840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.969 [2024-05-15 12:30:27.332212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.969 [2024-05-15 12:30:27.332224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.969 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.332608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.332901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.332913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.333268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.333609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.333620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.333892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.334032] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.970 [2024-05-15 12:30:27.334275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.334288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.334602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.334882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.334894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.335184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.335524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.335538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.335832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.336040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.336052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 
00:28:58.970 [2024-05-15 12:30:27.336362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.336708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.336720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.337072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.337428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.337440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.337726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.338031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.338043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.338405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.338762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.338774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.339053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.339464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.339477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.339845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.340195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.340207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.340347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.340721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.340733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 
00:28:58.970 [2024-05-15 12:30:27.341018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.341402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.341423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.341776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.342199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.342213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.342410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.342578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.342590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:58.970 [2024-05-15 12:30:27.342895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:58.970 [2024-05-15 12:30:27.343268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.343281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:58.970 [2024-05-15 12:30:27.343572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.970 [2024-05-15 12:30:27.343941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.343954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.344369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.344564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.344576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 
00:28:58.970 [2024-05-15 12:30:27.344889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.345043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.345055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.345347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.345702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.345714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.346071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.346380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.346392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.346756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.347039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.347050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.347471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.347834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.347847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.348151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.348535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.348547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.348833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.349128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.349140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 
00:28:58.970 [2024-05-15 12:30:27.349450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.349764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.970 [2024-05-15 12:30:27.349775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.970 qpair failed and we were unable to recover it. 00:28:58.970 [2024-05-15 12:30:27.350122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.350474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.350486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:58.971 [2024-05-15 12:30:27.350900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:58.971 [2024-05-15 12:30:27.351270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.351283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:58.971 [2024-05-15 12:30:27.351570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.971 [2024-05-15 12:30:27.351983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.351996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 [2024-05-15 12:30:27.352434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.352741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.352753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 [2024-05-15 12:30:27.353208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.353543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.353554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 
00:28:58.971 [2024-05-15 12:30:27.353862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.354215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.354227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 [2024-05-15 12:30:27.354537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.354975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.354987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 [2024-05-15 12:30:27.355300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.355646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.355658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 [2024-05-15 12:30:27.355950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.356225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.356237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 [2024-05-15 12:30:27.356589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.357005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.357017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 [2024-05-15 12:30:27.357393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.357806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.357818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 [2024-05-15 12:30:27.358106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.358460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.358472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 
00:28:58.971 [2024-05-15 12:30:27.358820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:58.971 [2024-05-15 12:30:27.359185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.359208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 [2024-05-15 12:30:27.359498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:58.971 [2024-05-15 12:30:27.359702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.359715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.971 [2024-05-15 12:30:27.360082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.360437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.360449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 [2024-05-15 12:30:27.360793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.361149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.361161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 [2024-05-15 12:30:27.361510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.361906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.361918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5ba0000b90 with addr=10.0.0.2, port=4420 00:28:58.971 qpair failed and we were unable to recover it. 
00:28:58.971 [2024-05-15 12:30:27.362057] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:58.971 [2024-05-15 12:30:27.362264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.971 [2024-05-15 12:30:27.362296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.971 [2024-05-15 12:30:27.364789] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:28:58.971 [2024-05-15 12:30:27.364832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f5ba0000b90 (107): Transport endpoint is not connected 00:28:58.971 [2024-05-15 12:30:27.364876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.971 [2024-05-15 12:30:27.374597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.971 [2024-05-15 12:30:27.374737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.971 [2024-05-15 12:30:27.374758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.971 [2024-05-15 12:30:27.374769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.971 [2024-05-15 12:30:27.374779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:58.971 [2024-05-15 12:30:27.374800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.971 qpair failed and we were unable to recover it. 
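The xtrace fragments interleaved above amount to the target-side bring-up for this test case: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, attach the Malloc0 namespace, then add the data and discovery listeners on 10.0.0.2:4420. A minimal sketch of that same sequence, assuming SPDK's stock scripts/rpc.py client instead of the harness's rpc_cmd wrapper (flags copied verbatim from the trace; the Malloc0 bdev itself is created earlier in the script and is outside this excerpt):

    # Sketch only: the same RPCs as the rpc_cmd trace above, issued via scripts/rpc.py.
    # (Malloc0 bdev creation happens earlier in the harness and is not shown here.)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

For reference, errno = 111 in the posix_sock_create errors above is ECONNREFUSED, i.e. nothing was accepting on 10.0.0.2:4420 at the moment of each connect attempt.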
00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:58.971 12:30:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2299913 00:28:58.971 [2024-05-15 12:30:27.384591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.971 [2024-05-15 12:30:27.384757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.971 [2024-05-15 12:30:27.384779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.971 [2024-05-15 12:30:27.384789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.971 [2024-05-15 12:30:27.384798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:58.971 [2024-05-15 12:30:27.384817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.971 qpair failed and we were unable to recover it. 00:28:58.971 [2024-05-15 12:30:27.394553] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.971 [2024-05-15 12:30:27.394672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.971 [2024-05-15 12:30:27.394690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.971 [2024-05-15 12:30:27.394700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.971 [2024-05-15 12:30:27.394708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:58.972 [2024-05-15 12:30:27.394727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.972 qpair failed and we were unable to recover it. 00:28:58.972 [2024-05-15 12:30:27.404590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.972 [2024-05-15 12:30:27.404708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.972 [2024-05-15 12:30:27.404726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.972 [2024-05-15 12:30:27.404735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.972 [2024-05-15 12:30:27.404744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:58.972 [2024-05-15 12:30:27.404762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.972 qpair failed and we were unable to recover it. 
00:28:58.972 [2024-05-15 12:30:27.414662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.972 [2024-05-15 12:30:27.414775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.972 [2024-05-15 12:30:27.414795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.972 [2024-05-15 12:30:27.414804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.972 [2024-05-15 12:30:27.414813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:58.972 [2024-05-15 12:30:27.414832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.972 qpair failed and we were unable to recover it. 00:28:58.972 [2024-05-15 12:30:27.424668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.972 [2024-05-15 12:30:27.424778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.972 [2024-05-15 12:30:27.424796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.972 [2024-05-15 12:30:27.424805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.972 [2024-05-15 12:30:27.424817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:58.972 [2024-05-15 12:30:27.424836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.972 qpair failed and we were unable to recover it. 00:28:58.972 [2024-05-15 12:30:27.434654] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.972 [2024-05-15 12:30:27.434774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.972 [2024-05-15 12:30:27.434792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.972 [2024-05-15 12:30:27.434801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.972 [2024-05-15 12:30:27.434810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:58.972 [2024-05-15 12:30:27.434829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.972 qpair failed and we were unable to recover it. 
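The failure blocks that follow the wait 2299913 line all share one signature: _nvmf_ctrlr_add_io_qpair rejects the I/O qpair with "Unknown controller ID 0x1", the host's Fabric CONNECT completes with sct 1, sc 130, and the qpair is torn down with CQ transport error -6. When triaging a log like this it is usually enough to confirm that a single signature accounts for all of the failures; a minimal sketch, assuming the console output has been saved to a file named target_disconnect.log (hypothetical name):

    # Hypothetical filename; both search strings are copied verbatim from the log above.
    grep -c 'qpair failed and we were unable to recover it' target_disconnect.log
    grep -c 'Connect command completed with error: sct 1, sc 130' target_disconnect.log

If the two counts track each other, the excerpt reflects one repeating failure mode rather than several distinct ones.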
00:28:58.972 [2024-05-15 12:30:27.444719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.972 [2024-05-15 12:30:27.444838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.972 [2024-05-15 12:30:27.444856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.972 [2024-05-15 12:30:27.444865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.972 [2024-05-15 12:30:27.444874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:58.972 [2024-05-15 12:30:27.444893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.972 qpair failed and we were unable to recover it. 00:28:58.972 [2024-05-15 12:30:27.454760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.972 [2024-05-15 12:30:27.454877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.972 [2024-05-15 12:30:27.454896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.972 [2024-05-15 12:30:27.454906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.972 [2024-05-15 12:30:27.454915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:58.972 [2024-05-15 12:30:27.454933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.972 qpair failed and we were unable to recover it. 00:28:58.972 [2024-05-15 12:30:27.464744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.972 [2024-05-15 12:30:27.465053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.972 [2024-05-15 12:30:27.465072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.972 [2024-05-15 12:30:27.465081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.972 [2024-05-15 12:30:27.465090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:58.972 [2024-05-15 12:30:27.465109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.972 qpair failed and we were unable to recover it. 
00:28:58.972 [2024-05-15 12:30:27.474765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.972 [2024-05-15 12:30:27.474884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.972 [2024-05-15 12:30:27.474902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.972 [2024-05-15 12:30:27.474911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.972 [2024-05-15 12:30:27.474919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:58.972 [2024-05-15 12:30:27.474937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.972 qpair failed and we were unable to recover it. 00:28:58.972 [2024-05-15 12:30:27.484848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.972 [2024-05-15 12:30:27.484986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.972 [2024-05-15 12:30:27.485003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.972 [2024-05-15 12:30:27.485013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.972 [2024-05-15 12:30:27.485021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:58.972 [2024-05-15 12:30:27.485040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.972 qpair failed and we were unable to recover it. 00:28:59.231 [2024-05-15 12:30:27.494878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.231 [2024-05-15 12:30:27.495005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.231 [2024-05-15 12:30:27.495022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.231 [2024-05-15 12:30:27.495032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.231 [2024-05-15 12:30:27.495040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.231 [2024-05-15 12:30:27.495059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.231 qpair failed and we were unable to recover it. 
00:28:59.231 [2024-05-15 12:30:27.504826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.231 [2024-05-15 12:30:27.504948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.504966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.504975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.504984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.505002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 00:28:59.232 [2024-05-15 12:30:27.514888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.515007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.515025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.515035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.515046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.515064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 00:28:59.232 [2024-05-15 12:30:27.524939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.525056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.525073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.525082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.525091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.525109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 
00:28:59.232 [2024-05-15 12:30:27.534954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.535084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.535102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.535112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.535120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.535138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 00:28:59.232 [2024-05-15 12:30:27.545012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.545127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.545145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.545154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.545163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.545181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 00:28:59.232 [2024-05-15 12:30:27.555019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.555143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.555161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.555171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.555179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.555203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 
00:28:59.232 [2024-05-15 12:30:27.564988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.565154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.565171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.565181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.565189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.565212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 00:28:59.232 [2024-05-15 12:30:27.575103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.575224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.575242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.575252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.575260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.575279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 00:28:59.232 [2024-05-15 12:30:27.585114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.585232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.585250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.585260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.585268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.585287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 
00:28:59.232 [2024-05-15 12:30:27.595106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.595222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.595240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.595249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.595258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.595276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 00:28:59.232 [2024-05-15 12:30:27.605112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.605228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.605246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.605259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.605267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.605285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 00:28:59.232 [2024-05-15 12:30:27.615413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.615542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.615559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.615569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.615577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.615595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 
00:28:59.232 [2024-05-15 12:30:27.625291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.625404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.625422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.625431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.625440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.625458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 00:28:59.232 [2024-05-15 12:30:27.635284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.232 [2024-05-15 12:30:27.635399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.232 [2024-05-15 12:30:27.635416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.232 [2024-05-15 12:30:27.635426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.232 [2024-05-15 12:30:27.635434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.232 [2024-05-15 12:30:27.635452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.232 qpair failed and we were unable to recover it. 00:28:59.232 [2024-05-15 12:30:27.645482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.233 [2024-05-15 12:30:27.645611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.233 [2024-05-15 12:30:27.645629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.233 [2024-05-15 12:30:27.645638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.233 [2024-05-15 12:30:27.645647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.233 [2024-05-15 12:30:27.645666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.233 qpair failed and we were unable to recover it. 
00:28:59.233 [2024-05-15 12:30:27.655338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.233 [2024-05-15 12:30:27.655470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.233 [2024-05-15 12:30:27.655488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.233 [2024-05-15 12:30:27.655497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.233 [2024-05-15 12:30:27.655506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.233 [2024-05-15 12:30:27.655525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.233 qpair failed and we were unable to recover it. 00:28:59.233 [2024-05-15 12:30:27.665386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.233 [2024-05-15 12:30:27.665503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.233 [2024-05-15 12:30:27.665521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.233 [2024-05-15 12:30:27.665531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.233 [2024-05-15 12:30:27.665539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.233 [2024-05-15 12:30:27.665558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.233 qpair failed and we were unable to recover it. 00:28:59.233 [2024-05-15 12:30:27.675341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.233 [2024-05-15 12:30:27.675461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.233 [2024-05-15 12:30:27.675479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.233 [2024-05-15 12:30:27.675488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.233 [2024-05-15 12:30:27.675497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.233 [2024-05-15 12:30:27.675516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.233 qpair failed and we were unable to recover it. 
00:28:59.233 [2024-05-15 12:30:27.685418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.233 [2024-05-15 12:30:27.685534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.233 [2024-05-15 12:30:27.685551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.233 [2024-05-15 12:30:27.685561] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.233 [2024-05-15 12:30:27.685569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.233 [2024-05-15 12:30:27.685588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.233 qpair failed and we were unable to recover it. 00:28:59.233 [2024-05-15 12:30:27.695447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.233 [2024-05-15 12:30:27.695563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.233 [2024-05-15 12:30:27.695584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.233 [2024-05-15 12:30:27.695593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.233 [2024-05-15 12:30:27.695602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.233 [2024-05-15 12:30:27.695620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.233 qpair failed and we were unable to recover it. 00:28:59.233 [2024-05-15 12:30:27.705469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.233 [2024-05-15 12:30:27.705582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.233 [2024-05-15 12:30:27.705600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.233 [2024-05-15 12:30:27.705610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.233 [2024-05-15 12:30:27.705618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.233 [2024-05-15 12:30:27.705637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.233 qpair failed and we were unable to recover it. 
00:28:59.233 [2024-05-15 12:30:27.715459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.233 [2024-05-15 12:30:27.715577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.233 [2024-05-15 12:30:27.715595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.233 [2024-05-15 12:30:27.715604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.233 [2024-05-15 12:30:27.715613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.233 [2024-05-15 12:30:27.715631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.233 qpair failed and we were unable to recover it. 00:28:59.233 [2024-05-15 12:30:27.725459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.233 [2024-05-15 12:30:27.725575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.233 [2024-05-15 12:30:27.725593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.233 [2024-05-15 12:30:27.725602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.233 [2024-05-15 12:30:27.725611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.233 [2024-05-15 12:30:27.725629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.233 qpair failed and we were unable to recover it. 00:28:59.233 [2024-05-15 12:30:27.735533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.233 [2024-05-15 12:30:27.735643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.233 [2024-05-15 12:30:27.735661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.233 [2024-05-15 12:30:27.735671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.233 [2024-05-15 12:30:27.735679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.233 [2024-05-15 12:30:27.735703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.233 qpair failed and we were unable to recover it. 
00:28:59.233 [2024-05-15 12:30:27.745550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.233 [2024-05-15 12:30:27.745666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.233 [2024-05-15 12:30:27.745683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.233 [2024-05-15 12:30:27.745693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.233 [2024-05-15 12:30:27.745701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.233 [2024-05-15 12:30:27.745720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.233 qpair failed and we were unable to recover it. 00:28:59.234 [2024-05-15 12:30:27.755561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.234 [2024-05-15 12:30:27.755676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.234 [2024-05-15 12:30:27.755694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.234 [2024-05-15 12:30:27.755703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.234 [2024-05-15 12:30:27.755711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.234 [2024-05-15 12:30:27.755730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.234 qpair failed and we were unable to recover it. 00:28:59.492 [2024-05-15 12:30:27.765640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.492 [2024-05-15 12:30:27.765773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.492 [2024-05-15 12:30:27.765790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.492 [2024-05-15 12:30:27.765800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.492 [2024-05-15 12:30:27.765808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.492 [2024-05-15 12:30:27.765827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.492 qpair failed and we were unable to recover it. 
00:28:59.492 [2024-05-15 12:30:27.775567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.492 [2024-05-15 12:30:27.775684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.492 [2024-05-15 12:30:27.775702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.492 [2024-05-15 12:30:27.775712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.492 [2024-05-15 12:30:27.775720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.492 [2024-05-15 12:30:27.775738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-05-15 12:30:27.785679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.492 [2024-05-15 12:30:27.785839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.492 [2024-05-15 12:30:27.785860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.492 [2024-05-15 12:30:27.785869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.492 [2024-05-15 12:30:27.785878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.492 [2024-05-15 12:30:27.785897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.492 qpair failed and we were unable to recover it. 00:28:59.492 [2024-05-15 12:30:27.795712] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.492 [2024-05-15 12:30:27.795828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.492 [2024-05-15 12:30:27.795846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.493 [2024-05-15 12:30:27.795855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.493 [2024-05-15 12:30:27.795864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.493 [2024-05-15 12:30:27.795882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.493 qpair failed and we were unable to recover it. 
00:28:59.493 [2024-05-15 12:30:27.805727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.493 [2024-05-15 12:30:27.805844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.493 [2024-05-15 12:30:27.805862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.493 [2024-05-15 12:30:27.805871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.493 [2024-05-15 12:30:27.805880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.493 [2024-05-15 12:30:27.805898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-05-15 12:30:27.815708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.493 [2024-05-15 12:30:27.815854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.493 [2024-05-15 12:30:27.815873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.493 [2024-05-15 12:30:27.815883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.493 [2024-05-15 12:30:27.815892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.493 [2024-05-15 12:30:27.815910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-05-15 12:30:27.825774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.493 [2024-05-15 12:30:27.825892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.493 [2024-05-15 12:30:27.825909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.493 [2024-05-15 12:30:27.825919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.493 [2024-05-15 12:30:27.825931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.493 [2024-05-15 12:30:27.825950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.493 qpair failed and we were unable to recover it. 
00:28:59.493 [2024-05-15 12:30:27.835804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.493 [2024-05-15 12:30:27.835920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.493 [2024-05-15 12:30:27.835937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.493 [2024-05-15 12:30:27.835947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.493 [2024-05-15 12:30:27.835955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.493 [2024-05-15 12:30:27.835974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-05-15 12:30:27.845770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.493 [2024-05-15 12:30:27.845889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.493 [2024-05-15 12:30:27.845906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.493 [2024-05-15 12:30:27.845915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.493 [2024-05-15 12:30:27.845924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.493 [2024-05-15 12:30:27.845942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-05-15 12:30:27.855849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.493 [2024-05-15 12:30:27.855963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.493 [2024-05-15 12:30:27.855980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.493 [2024-05-15 12:30:27.855990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.493 [2024-05-15 12:30:27.855998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.493 [2024-05-15 12:30:27.856017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.493 qpair failed and we were unable to recover it. 
00:28:59.493 [2024-05-15 12:30:27.865926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.493 [2024-05-15 12:30:27.866038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.493 [2024-05-15 12:30:27.866056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.493 [2024-05-15 12:30:27.866065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.493 [2024-05-15 12:30:27.866074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.493 [2024-05-15 12:30:27.866092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-05-15 12:30:27.875910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.493 [2024-05-15 12:30:27.876041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.493 [2024-05-15 12:30:27.876059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.493 [2024-05-15 12:30:27.876069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.493 [2024-05-15 12:30:27.876077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.493 [2024-05-15 12:30:27.876095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-05-15 12:30:27.885882] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.493 [2024-05-15 12:30:27.885993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.493 [2024-05-15 12:30:27.886011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.493 [2024-05-15 12:30:27.886020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.493 [2024-05-15 12:30:27.886028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.493 [2024-05-15 12:30:27.886047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.493 qpair failed and we were unable to recover it. 
00:28:59.493 [2024-05-15 12:30:27.895977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.493 [2024-05-15 12:30:27.896095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.493 [2024-05-15 12:30:27.896112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.493 [2024-05-15 12:30:27.896122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.493 [2024-05-15 12:30:27.896130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.493 [2024-05-15 12:30:27.896148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-05-15 12:30:27.906009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.493 [2024-05-15 12:30:27.906125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.493 [2024-05-15 12:30:27.906143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.493 [2024-05-15 12:30:27.906152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.493 [2024-05-15 12:30:27.906161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.493 [2024-05-15 12:30:27.906179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.493 qpair failed and we were unable to recover it. 00:28:59.493 [2024-05-15 12:30:27.916023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.494 [2024-05-15 12:30:27.916138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.494 [2024-05-15 12:30:27.916156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.494 [2024-05-15 12:30:27.916165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.494 [2024-05-15 12:30:27.916177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.494 [2024-05-15 12:30:27.916202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.494 qpair failed and we were unable to recover it. 
00:28:59.494 [2024-05-15 12:30:27.926002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.494 [2024-05-15 12:30:27.926124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.494 [2024-05-15 12:30:27.926141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.494 [2024-05-15 12:30:27.926151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.494 [2024-05-15 12:30:27.926159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.494 [2024-05-15 12:30:27.926178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.494 qpair failed and we were unable to recover it. 00:28:59.494 [2024-05-15 12:30:27.936097] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.494 [2024-05-15 12:30:27.936221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.494 [2024-05-15 12:30:27.936238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.494 [2024-05-15 12:30:27.936247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.494 [2024-05-15 12:30:27.936255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.494 [2024-05-15 12:30:27.936274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.494 qpair failed and we were unable to recover it. 00:28:59.494 [2024-05-15 12:30:27.946108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.494 [2024-05-15 12:30:27.946225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.494 [2024-05-15 12:30:27.946242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.494 [2024-05-15 12:30:27.946252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.494 [2024-05-15 12:30:27.946260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.494 [2024-05-15 12:30:27.946279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.494 qpair failed and we were unable to recover it. 
00:28:59.494 [2024-05-15 12:30:27.956074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.494 [2024-05-15 12:30:27.956208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.494 [2024-05-15 12:30:27.956225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.494 [2024-05-15 12:30:27.956235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.494 [2024-05-15 12:30:27.956243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.494 [2024-05-15 12:30:27.956261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.494 qpair failed and we were unable to recover it. 00:28:59.494 [2024-05-15 12:30:27.966189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.494 [2024-05-15 12:30:27.966312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.494 [2024-05-15 12:30:27.966330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.494 [2024-05-15 12:30:27.966340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.494 [2024-05-15 12:30:27.966348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.494 [2024-05-15 12:30:27.966367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.494 qpair failed and we were unable to recover it. 00:28:59.494 [2024-05-15 12:30:27.976249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.494 [2024-05-15 12:30:27.976387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.494 [2024-05-15 12:30:27.976405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.494 [2024-05-15 12:30:27.976414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.494 [2024-05-15 12:30:27.976423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.494 [2024-05-15 12:30:27.976441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.494 qpair failed and we were unable to recover it. 
00:28:59.494 [2024-05-15 12:30:27.986275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.494 [2024-05-15 12:30:27.986392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.494 [2024-05-15 12:30:27.986410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.494 [2024-05-15 12:30:27.986419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.494 [2024-05-15 12:30:27.986428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.494 [2024-05-15 12:30:27.986446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.494 qpair failed and we were unable to recover it. 00:28:59.494 [2024-05-15 12:30:27.996246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.494 [2024-05-15 12:30:27.996369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.494 [2024-05-15 12:30:27.996387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.494 [2024-05-15 12:30:27.996396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.494 [2024-05-15 12:30:27.996405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.494 [2024-05-15 12:30:27.996423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.494 qpair failed and we were unable to recover it. 00:28:59.494 [2024-05-15 12:30:28.006312] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.494 [2024-05-15 12:30:28.006429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.494 [2024-05-15 12:30:28.006447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.494 [2024-05-15 12:30:28.006459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.494 [2024-05-15 12:30:28.006468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.494 [2024-05-15 12:30:28.006487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.494 qpair failed and we were unable to recover it. 
00:28:59.494 [2024-05-15 12:30:28.016373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.494 [2024-05-15 12:30:28.016505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.494 [2024-05-15 12:30:28.016523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.494 [2024-05-15 12:30:28.016532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.494 [2024-05-15 12:30:28.016541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.494 [2024-05-15 12:30:28.016560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.494 qpair failed and we were unable to recover it. 00:28:59.754 [2024-05-15 12:30:28.026376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.754 [2024-05-15 12:30:28.026506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.754 [2024-05-15 12:30:28.026524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.754 [2024-05-15 12:30:28.026533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.754 [2024-05-15 12:30:28.026541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.754 [2024-05-15 12:30:28.026560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.754 qpair failed and we were unable to recover it. 00:28:59.754 [2024-05-15 12:30:28.036376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.754 [2024-05-15 12:30:28.036495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.754 [2024-05-15 12:30:28.036512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.754 [2024-05-15 12:30:28.036522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.754 [2024-05-15 12:30:28.036531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.754 [2024-05-15 12:30:28.036549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.754 qpair failed and we were unable to recover it. 
00:28:59.754 [2024-05-15 12:30:28.046409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.754 [2024-05-15 12:30:28.046525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.754 [2024-05-15 12:30:28.046542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.754 [2024-05-15 12:30:28.046552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.754 [2024-05-15 12:30:28.046560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.754 [2024-05-15 12:30:28.046578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.754 qpair failed and we were unable to recover it. 00:28:59.754 [2024-05-15 12:30:28.056440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.754 [2024-05-15 12:30:28.056550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.754 [2024-05-15 12:30:28.056568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.754 [2024-05-15 12:30:28.056577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.754 [2024-05-15 12:30:28.056586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.754 [2024-05-15 12:30:28.056605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.754 qpair failed and we were unable to recover it. 00:28:59.754 [2024-05-15 12:30:28.066482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.754 [2024-05-15 12:30:28.066596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.754 [2024-05-15 12:30:28.066614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.754 [2024-05-15 12:30:28.066623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.754 [2024-05-15 12:30:28.066631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.754 [2024-05-15 12:30:28.066650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.754 qpair failed and we were unable to recover it. 
00:28:59.754 [2024-05-15 12:30:28.076493] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.754 [2024-05-15 12:30:28.076612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.754 [2024-05-15 12:30:28.076629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.754 [2024-05-15 12:30:28.076638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.754 [2024-05-15 12:30:28.076647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.754 [2024-05-15 12:30:28.076666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 00:28:59.755 [2024-05-15 12:30:28.086524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.086668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.086686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.086696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.086704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.086723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 00:28:59.755 [2024-05-15 12:30:28.096566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.096727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.096747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.096757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.096766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.096785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 
00:28:59.755 [2024-05-15 12:30:28.106575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.106689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.106707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.106717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.106725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.106744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 00:28:59.755 [2024-05-15 12:30:28.116614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.116730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.116748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.116758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.116766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.116784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 00:28:59.755 [2024-05-15 12:30:28.126729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.126888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.126906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.126916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.126924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.126943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 
00:28:59.755 [2024-05-15 12:30:28.136698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.136809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.136826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.136836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.136844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.136865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 00:28:59.755 [2024-05-15 12:30:28.146728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.146858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.146876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.146885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.146893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.146912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 00:28:59.755 [2024-05-15 12:30:28.156695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.156807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.156825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.156835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.156843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.156862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 
00:28:59.755 [2024-05-15 12:30:28.166656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.166773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.166790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.166800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.166808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.166827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 00:28:59.755 [2024-05-15 12:30:28.176701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.176814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.176832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.176842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.176850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.176869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 00:28:59.755 [2024-05-15 12:30:28.186795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.186914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.186935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.186944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.186953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.186971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 
00:28:59.755 [2024-05-15 12:30:28.196803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.196917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.196934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.196944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.196952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.196970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 00:28:59.755 [2024-05-15 12:30:28.206855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.206975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.206992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.207002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.207010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.755 [2024-05-15 12:30:28.207029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.755 qpair failed and we were unable to recover it. 00:28:59.755 [2024-05-15 12:30:28.216856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.755 [2024-05-15 12:30:28.216976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.755 [2024-05-15 12:30:28.216993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.755 [2024-05-15 12:30:28.217003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.755 [2024-05-15 12:30:28.217011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.756 [2024-05-15 12:30:28.217029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.756 qpair failed and we were unable to recover it. 
00:28:59.756 [2024-05-15 12:30:28.226901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.756 [2024-05-15 12:30:28.227016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.756 [2024-05-15 12:30:28.227033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.756 [2024-05-15 12:30:28.227043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.756 [2024-05-15 12:30:28.227051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.756 [2024-05-15 12:30:28.227072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.756 qpair failed and we were unable to recover it. 00:28:59.756 [2024-05-15 12:30:28.236918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.756 [2024-05-15 12:30:28.237032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.756 [2024-05-15 12:30:28.237049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.756 [2024-05-15 12:30:28.237059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.756 [2024-05-15 12:30:28.237067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.756 [2024-05-15 12:30:28.237085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.756 qpair failed and we were unable to recover it. 00:28:59.756 [2024-05-15 12:30:28.246961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.756 [2024-05-15 12:30:28.247102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.756 [2024-05-15 12:30:28.247119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.756 [2024-05-15 12:30:28.247128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.756 [2024-05-15 12:30:28.247137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.756 [2024-05-15 12:30:28.247155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.756 qpair failed and we were unable to recover it. 
00:28:59.756 [2024-05-15 12:30:28.256957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.756 [2024-05-15 12:30:28.257070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.756 [2024-05-15 12:30:28.257087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.756 [2024-05-15 12:30:28.257097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.756 [2024-05-15 12:30:28.257105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.756 [2024-05-15 12:30:28.257123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.756 qpair failed and we were unable to recover it. 00:28:59.756 [2024-05-15 12:30:28.267007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.756 [2024-05-15 12:30:28.267122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.756 [2024-05-15 12:30:28.267140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.756 [2024-05-15 12:30:28.267149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.756 [2024-05-15 12:30:28.267158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.756 [2024-05-15 12:30:28.267176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.756 qpair failed and we were unable to recover it. 00:28:59.756 [2024-05-15 12:30:28.277030] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.756 [2024-05-15 12:30:28.277149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.756 [2024-05-15 12:30:28.277167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.756 [2024-05-15 12:30:28.277176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.756 [2024-05-15 12:30:28.277185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:28:59.756 [2024-05-15 12:30:28.277210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:59.756 qpair failed and we were unable to recover it. 
00:29:00.015 [2024-05-15 12:30:28.287083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.015 [2024-05-15 12:30:28.287243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.015 [2024-05-15 12:30:28.287261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.015 [2024-05-15 12:30:28.287271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.015 [2024-05-15 12:30:28.287279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.015 [2024-05-15 12:30:28.287298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.015 qpair failed and we were unable to recover it. 00:29:00.015 [2024-05-15 12:30:28.297100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.015 [2024-05-15 12:30:28.297225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.015 [2024-05-15 12:30:28.297243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.015 [2024-05-15 12:30:28.297253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.015 [2024-05-15 12:30:28.297261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.015 [2024-05-15 12:30:28.297280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.015 qpair failed and we were unable to recover it. 00:29:00.015 [2024-05-15 12:30:28.307136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.015 [2024-05-15 12:30:28.307256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.015 [2024-05-15 12:30:28.307274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.015 [2024-05-15 12:30:28.307283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.015 [2024-05-15 12:30:28.307292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.015 [2024-05-15 12:30:28.307311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.015 qpair failed and we were unable to recover it. 
00:29:00.015 [2024-05-15 12:30:28.317150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.015 [2024-05-15 12:30:28.317268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.015 [2024-05-15 12:30:28.317285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.015 [2024-05-15 12:30:28.317295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.015 [2024-05-15 12:30:28.317307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.015 [2024-05-15 12:30:28.317326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.015 qpair failed and we were unable to recover it. 00:29:00.015 [2024-05-15 12:30:28.327168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.015 [2024-05-15 12:30:28.327322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.015 [2024-05-15 12:30:28.327339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.015 [2024-05-15 12:30:28.327349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.015 [2024-05-15 12:30:28.327357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.015 [2024-05-15 12:30:28.327375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.015 qpair failed and we were unable to recover it. 00:29:00.015 [2024-05-15 12:30:28.337220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.015 [2024-05-15 12:30:28.337334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.015 [2024-05-15 12:30:28.337352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.015 [2024-05-15 12:30:28.337361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.015 [2024-05-15 12:30:28.337370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.015 [2024-05-15 12:30:28.337388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.015 qpair failed and we were unable to recover it. 
00:29:00.016 [2024-05-15 12:30:28.347155] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.347282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.347300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.347309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.347318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.347336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 00:29:00.016 [2024-05-15 12:30:28.357258] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.357376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.357394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.357404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.357413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.357431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 00:29:00.016 [2024-05-15 12:30:28.367316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.367443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.367461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.367471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.367480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.367500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 
00:29:00.016 [2024-05-15 12:30:28.377299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.377416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.377434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.377443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.377452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.377471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 00:29:00.016 [2024-05-15 12:30:28.387345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.387464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.387482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.387492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.387500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.387519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 00:29:00.016 [2024-05-15 12:30:28.397334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.397496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.397514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.397524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.397532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.397551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 
00:29:00.016 [2024-05-15 12:30:28.407392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.407510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.407527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.407542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.407551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.407569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 00:29:00.016 [2024-05-15 12:30:28.417424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.417544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.417563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.417573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.417582] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.417601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 00:29:00.016 [2024-05-15 12:30:28.427469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.427582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.427599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.427609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.427618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.427637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 
00:29:00.016 [2024-05-15 12:30:28.437486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.437601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.437619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.437629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.437637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.437655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 00:29:00.016 [2024-05-15 12:30:28.447555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.447671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.447689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.447699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.447708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.447727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 00:29:00.016 [2024-05-15 12:30:28.457562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.457679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.457698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.457707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.457716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.457734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 
00:29:00.016 [2024-05-15 12:30:28.467585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.467737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.467754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.016 [2024-05-15 12:30:28.467764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.016 [2024-05-15 12:30:28.467772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.016 [2024-05-15 12:30:28.467791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.016 qpair failed and we were unable to recover it. 00:29:00.016 [2024-05-15 12:30:28.477563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.016 [2024-05-15 12:30:28.477681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.016 [2024-05-15 12:30:28.477699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.017 [2024-05-15 12:30:28.477709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.017 [2024-05-15 12:30:28.477717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.017 [2024-05-15 12:30:28.477735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.017 qpair failed and we were unable to recover it. 00:29:00.017 [2024-05-15 12:30:28.487638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.017 [2024-05-15 12:30:28.487792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.017 [2024-05-15 12:30:28.487810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.017 [2024-05-15 12:30:28.487820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.017 [2024-05-15 12:30:28.487828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.017 [2024-05-15 12:30:28.487847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.017 qpair failed and we were unable to recover it. 
00:29:00.017 [2024-05-15 12:30:28.497629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.017 [2024-05-15 12:30:28.497746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.017 [2024-05-15 12:30:28.497767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.017 [2024-05-15 12:30:28.497777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.017 [2024-05-15 12:30:28.497785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.017 [2024-05-15 12:30:28.497803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.017 qpair failed and we were unable to recover it. 00:29:00.017 [2024-05-15 12:30:28.507695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.017 [2024-05-15 12:30:28.507809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.017 [2024-05-15 12:30:28.507826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.017 [2024-05-15 12:30:28.507835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.017 [2024-05-15 12:30:28.507844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.017 [2024-05-15 12:30:28.507862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.017 qpair failed and we were unable to recover it. 00:29:00.017 [2024-05-15 12:30:28.517723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.017 [2024-05-15 12:30:28.517837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.017 [2024-05-15 12:30:28.517854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.017 [2024-05-15 12:30:28.517864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.017 [2024-05-15 12:30:28.517872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.017 [2024-05-15 12:30:28.517890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.017 qpair failed and we were unable to recover it. 
00:29:00.017 [2024-05-15 12:30:28.527760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.017 [2024-05-15 12:30:28.527885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.017 [2024-05-15 12:30:28.527903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.017 [2024-05-15 12:30:28.527912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.017 [2024-05-15 12:30:28.527920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.017 [2024-05-15 12:30:28.527939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.017 qpair failed and we were unable to recover it. 00:29:00.017 [2024-05-15 12:30:28.537759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.017 [2024-05-15 12:30:28.537872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.017 [2024-05-15 12:30:28.537890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.017 [2024-05-15 12:30:28.537899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.017 [2024-05-15 12:30:28.537908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.017 [2024-05-15 12:30:28.537925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.017 qpair failed and we were unable to recover it. 00:29:00.276 [2024-05-15 12:30:28.547820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.276 [2024-05-15 12:30:28.547939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.276 [2024-05-15 12:30:28.547956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.276 [2024-05-15 12:30:28.547966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.276 [2024-05-15 12:30:28.547974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.276 [2024-05-15 12:30:28.547992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.276 qpair failed and we were unable to recover it. 
00:29:00.276 [2024-05-15 12:30:28.557912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.276 [2024-05-15 12:30:28.558069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.276 [2024-05-15 12:30:28.558086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.276 [2024-05-15 12:30:28.558096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.276 [2024-05-15 12:30:28.558104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.276 [2024-05-15 12:30:28.558122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.276 qpair failed and we were unable to recover it. 00:29:00.276 [2024-05-15 12:30:28.567807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.276 [2024-05-15 12:30:28.567929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.276 [2024-05-15 12:30:28.567947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.276 [2024-05-15 12:30:28.567957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.276 [2024-05-15 12:30:28.567965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.276 [2024-05-15 12:30:28.567983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.276 qpair failed and we were unable to recover it. 00:29:00.276 [2024-05-15 12:30:28.577837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.276 [2024-05-15 12:30:28.577951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.276 [2024-05-15 12:30:28.577969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.276 [2024-05-15 12:30:28.577978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.276 [2024-05-15 12:30:28.577987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.276 [2024-05-15 12:30:28.578005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.276 qpair failed and we were unable to recover it. 
00:29:00.276 [2024-05-15 12:30:28.587933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.276 [2024-05-15 12:30:28.588048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.276 [2024-05-15 12:30:28.588069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.276 [2024-05-15 12:30:28.588079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.276 [2024-05-15 12:30:28.588087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.276 [2024-05-15 12:30:28.588106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.276 qpair failed and we were unable to recover it. 00:29:00.276 [2024-05-15 12:30:28.597947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.276 [2024-05-15 12:30:28.598066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.276 [2024-05-15 12:30:28.598084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.276 [2024-05-15 12:30:28.598094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.276 [2024-05-15 12:30:28.598102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.276 [2024-05-15 12:30:28.598121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.276 qpair failed and we were unable to recover it. 00:29:00.276 [2024-05-15 12:30:28.607914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.276 [2024-05-15 12:30:28.608026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.608043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.608053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.608061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.608080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 
00:29:00.277 [2024-05-15 12:30:28.618018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.618132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.618150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.618159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.618168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.618187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 00:29:00.277 [2024-05-15 12:30:28.628052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.628164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.628182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.628198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.628207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.628229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 00:29:00.277 [2024-05-15 12:30:28.637992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.638113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.638131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.638140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.638149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.638167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 
00:29:00.277 [2024-05-15 12:30:28.648099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.648404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.648422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.648432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.648441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.648460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 00:29:00.277 [2024-05-15 12:30:28.658132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.658249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.658267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.658276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.658285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.658303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 00:29:00.277 [2024-05-15 12:30:28.668152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.668447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.668466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.668475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.668484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.668503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 
00:29:00.277 [2024-05-15 12:30:28.678175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.678301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.678322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.678332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.678340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.678359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 00:29:00.277 [2024-05-15 12:30:28.688225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.688339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.688356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.688366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.688375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.688394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 00:29:00.277 [2024-05-15 12:30:28.698225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.698334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.698351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.698361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.698369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.698388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 
00:29:00.277 [2024-05-15 12:30:28.708274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.708386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.708404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.708414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.708423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.708442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 00:29:00.277 [2024-05-15 12:30:28.718285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.718399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.718417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.718426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.718438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.718457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 00:29:00.277 [2024-05-15 12:30:28.728349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.728475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.728492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.728502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.728510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.728529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 
00:29:00.277 [2024-05-15 12:30:28.738318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.277 [2024-05-15 12:30:28.738477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.277 [2024-05-15 12:30:28.738495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.277 [2024-05-15 12:30:28.738505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.277 [2024-05-15 12:30:28.738513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.277 [2024-05-15 12:30:28.738531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.277 qpair failed and we were unable to recover it. 00:29:00.278 [2024-05-15 12:30:28.748338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.278 [2024-05-15 12:30:28.748453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.278 [2024-05-15 12:30:28.748471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.278 [2024-05-15 12:30:28.748481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.278 [2024-05-15 12:30:28.748489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.278 [2024-05-15 12:30:28.748508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.278 qpair failed and we were unable to recover it. 00:29:00.278 [2024-05-15 12:30:28.758416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.278 [2024-05-15 12:30:28.758537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.278 [2024-05-15 12:30:28.758555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.278 [2024-05-15 12:30:28.758565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.278 [2024-05-15 12:30:28.758573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.278 [2024-05-15 12:30:28.758592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.278 qpair failed and we were unable to recover it. 
00:29:00.278 [2024-05-15 12:30:28.768474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.278 [2024-05-15 12:30:28.768643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.278 [2024-05-15 12:30:28.768661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.278 [2024-05-15 12:30:28.768670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.278 [2024-05-15 12:30:28.768678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.278 [2024-05-15 12:30:28.768698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.278 qpair failed and we were unable to recover it. 00:29:00.278 [2024-05-15 12:30:28.778420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.278 [2024-05-15 12:30:28.778549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.278 [2024-05-15 12:30:28.778567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.278 [2024-05-15 12:30:28.778576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.278 [2024-05-15 12:30:28.778585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.278 [2024-05-15 12:30:28.778603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.278 qpair failed and we were unable to recover it. 00:29:00.278 [2024-05-15 12:30:28.788448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.278 [2024-05-15 12:30:28.788602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.278 [2024-05-15 12:30:28.788620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.278 [2024-05-15 12:30:28.788629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.278 [2024-05-15 12:30:28.788638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.278 [2024-05-15 12:30:28.788656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.278 qpair failed and we were unable to recover it. 
00:29:00.278 [2024-05-15 12:30:28.798499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.278 [2024-05-15 12:30:28.798615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.278 [2024-05-15 12:30:28.798632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.278 [2024-05-15 12:30:28.798642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.278 [2024-05-15 12:30:28.798650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.278 [2024-05-15 12:30:28.798669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.278 qpair failed and we were unable to recover it. 00:29:00.537 [2024-05-15 12:30:28.808565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.537 [2024-05-15 12:30:28.808691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.537 [2024-05-15 12:30:28.808709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.537 [2024-05-15 12:30:28.808722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.537 [2024-05-15 12:30:28.808730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.537 [2024-05-15 12:30:28.808748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.537 qpair failed and we were unable to recover it. 00:29:00.537 [2024-05-15 12:30:28.818534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.537 [2024-05-15 12:30:28.818652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.537 [2024-05-15 12:30:28.818670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.537 [2024-05-15 12:30:28.818680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.537 [2024-05-15 12:30:28.818688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.537 [2024-05-15 12:30:28.818706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.537 qpair failed and we were unable to recover it. 
00:29:00.537 [2024-05-15 12:30:28.828544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.537 [2024-05-15 12:30:28.828667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.537 [2024-05-15 12:30:28.828685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.828694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.828703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.828722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 00:29:00.538 [2024-05-15 12:30:28.838633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.838750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.838768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.838777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.838786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.838804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 00:29:00.538 [2024-05-15 12:30:28.848662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.848780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.848798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.848807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.848816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.848834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 
00:29:00.538 [2024-05-15 12:30:28.858713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.858829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.858847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.858856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.858865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.858883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 00:29:00.538 [2024-05-15 12:30:28.868740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.868857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.868875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.868884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.868893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.868911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 00:29:00.538 [2024-05-15 12:30:28.878707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.878825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.878843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.878852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.878861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.878880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 
00:29:00.538 [2024-05-15 12:30:28.888793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.889075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.889093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.889103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.889112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.889130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 00:29:00.538 [2024-05-15 12:30:28.898861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.898974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.898991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.899004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.899012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.899031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 00:29:00.538 [2024-05-15 12:30:28.908793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.908906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.908924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.908934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.908942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.908960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 
00:29:00.538 [2024-05-15 12:30:28.918857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.918971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.918989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.918999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.919007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.919025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 00:29:00.538 [2024-05-15 12:30:28.928959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.929076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.929094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.929104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.929112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.929130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 00:29:00.538 [2024-05-15 12:30:28.938943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.939241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.939260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.939270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.939278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.939297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 
00:29:00.538 [2024-05-15 12:30:28.948909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.949027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.949044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.949054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.949062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.949081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 00:29:00.538 [2024-05-15 12:30:28.958980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.538 [2024-05-15 12:30:28.959100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.538 [2024-05-15 12:30:28.959117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.538 [2024-05-15 12:30:28.959126] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.538 [2024-05-15 12:30:28.959135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.538 [2024-05-15 12:30:28.959153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.538 qpair failed and we were unable to recover it. 00:29:00.538 [2024-05-15 12:30:28.968948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.539 [2024-05-15 12:30:28.969069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.539 [2024-05-15 12:30:28.969087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.539 [2024-05-15 12:30:28.969097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.539 [2024-05-15 12:30:28.969105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.539 [2024-05-15 12:30:28.969124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.539 qpair failed and we were unable to recover it. 
00:29:00.539 [2024-05-15 12:30:28.979035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.539 [2024-05-15 12:30:28.979148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.539 [2024-05-15 12:30:28.979166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.539 [2024-05-15 12:30:28.979175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.539 [2024-05-15 12:30:28.979184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.539 [2024-05-15 12:30:28.979207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.539 qpair failed and we were unable to recover it. 00:29:00.539 [2024-05-15 12:30:28.989054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.539 [2024-05-15 12:30:28.989161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.539 [2024-05-15 12:30:28.989182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.539 [2024-05-15 12:30:28.989199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.539 [2024-05-15 12:30:28.989208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.539 [2024-05-15 12:30:28.989226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.539 qpair failed and we were unable to recover it. 00:29:00.539 [2024-05-15 12:30:28.999141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.539 [2024-05-15 12:30:28.999260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.539 [2024-05-15 12:30:28.999277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.539 [2024-05-15 12:30:28.999287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.539 [2024-05-15 12:30:28.999295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.539 [2024-05-15 12:30:28.999313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.539 qpair failed and we were unable to recover it. 
00:29:00.539 [2024-05-15 12:30:29.009117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.539 [2024-05-15 12:30:29.009236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.539 [2024-05-15 12:30:29.009255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.539 [2024-05-15 12:30:29.009264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.539 [2024-05-15 12:30:29.009273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.539 [2024-05-15 12:30:29.009292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.539 qpair failed and we were unable to recover it. 00:29:00.539 [2024-05-15 12:30:29.019198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.539 [2024-05-15 12:30:29.019350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.539 [2024-05-15 12:30:29.019368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.539 [2024-05-15 12:30:29.019377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.539 [2024-05-15 12:30:29.019386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.539 [2024-05-15 12:30:29.019404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.539 qpair failed and we were unable to recover it. 00:29:00.539 [2024-05-15 12:30:29.029189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.539 [2024-05-15 12:30:29.029308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.539 [2024-05-15 12:30:29.029326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.539 [2024-05-15 12:30:29.029336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.539 [2024-05-15 12:30:29.029344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.539 [2024-05-15 12:30:29.029365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.539 qpair failed and we were unable to recover it. 
00:29:00.539 [2024-05-15 12:30:29.039237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.539 [2024-05-15 12:30:29.039355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.539 [2024-05-15 12:30:29.039373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.539 [2024-05-15 12:30:29.039382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.539 [2024-05-15 12:30:29.039391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.539 [2024-05-15 12:30:29.039409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.539 qpair failed and we were unable to recover it. 00:29:00.539 [2024-05-15 12:30:29.049297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.539 [2024-05-15 12:30:29.049432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.539 [2024-05-15 12:30:29.049449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.539 [2024-05-15 12:30:29.049458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.539 [2024-05-15 12:30:29.049467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.539 [2024-05-15 12:30:29.049486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.539 qpair failed and we were unable to recover it. 00:29:00.539 [2024-05-15 12:30:29.059280] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.539 [2024-05-15 12:30:29.059398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.539 [2024-05-15 12:30:29.059416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.539 [2024-05-15 12:30:29.059426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.539 [2024-05-15 12:30:29.059434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.539 [2024-05-15 12:30:29.059453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.539 qpair failed and we were unable to recover it. 
00:29:00.804 [2024-05-15 12:30:29.069318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.804 [2024-05-15 12:30:29.069435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.804 [2024-05-15 12:30:29.069453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.804 [2024-05-15 12:30:29.069463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.804 [2024-05-15 12:30:29.069471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.804 [2024-05-15 12:30:29.069489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.804 qpair failed and we were unable to recover it. 00:29:00.804 [2024-05-15 12:30:29.079332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.804 [2024-05-15 12:30:29.079451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.804 [2024-05-15 12:30:29.079472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.804 [2024-05-15 12:30:29.079482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.804 [2024-05-15 12:30:29.079490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.804 [2024-05-15 12:30:29.079508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.804 qpair failed and we were unable to recover it. 00:29:00.804 [2024-05-15 12:30:29.089275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.804 [2024-05-15 12:30:29.089411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.804 [2024-05-15 12:30:29.089429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.804 [2024-05-15 12:30:29.089438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.804 [2024-05-15 12:30:29.089446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.089464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 
00:29:00.805 [2024-05-15 12:30:29.099397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.099511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.099528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.099538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.099546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.099564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 00:29:00.805 [2024-05-15 12:30:29.109417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.109541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.109560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.109570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.109579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.109598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 00:29:00.805 [2024-05-15 12:30:29.119432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.119549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.119566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.119576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.119589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.119608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 
00:29:00.805 [2024-05-15 12:30:29.129483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.129596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.129614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.129623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.129632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.129650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 00:29:00.805 [2024-05-15 12:30:29.139517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.139634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.139652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.139661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.139670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.139688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 00:29:00.805 [2024-05-15 12:30:29.149536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.149647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.149664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.149674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.149682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.149700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 
00:29:00.805 [2024-05-15 12:30:29.159548] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.159661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.159678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.159688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.159696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.159715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 00:29:00.805 [2024-05-15 12:30:29.169592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.169712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.169730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.169740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.169748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.169767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 00:29:00.805 [2024-05-15 12:30:29.179620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.179737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.179754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.179763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.179772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.179789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 
00:29:00.805 [2024-05-15 12:30:29.189653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.189769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.189786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.189796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.189804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.189822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 00:29:00.805 [2024-05-15 12:30:29.199661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.199772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.199789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.199799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.199807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.199825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 00:29:00.805 [2024-05-15 12:30:29.209702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.209823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.209840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.209850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.209862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.209879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 
00:29:00.805 [2024-05-15 12:30:29.219732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.219849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.219867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.805 [2024-05-15 12:30:29.219876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.805 [2024-05-15 12:30:29.219885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.805 [2024-05-15 12:30:29.219903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.805 qpair failed and we were unable to recover it. 00:29:00.805 [2024-05-15 12:30:29.229782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.805 [2024-05-15 12:30:29.229920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.805 [2024-05-15 12:30:29.229937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.806 [2024-05-15 12:30:29.229947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.806 [2024-05-15 12:30:29.229955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.806 [2024-05-15 12:30:29.229973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.806 qpair failed and we were unable to recover it. 00:29:00.806 [2024-05-15 12:30:29.239771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.806 [2024-05-15 12:30:29.239892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.806 [2024-05-15 12:30:29.239909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.806 [2024-05-15 12:30:29.239918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.806 [2024-05-15 12:30:29.239927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.806 [2024-05-15 12:30:29.239945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.806 qpair failed and we were unable to recover it. 
00:29:00.806 [2024-05-15 12:30:29.249813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.806 [2024-05-15 12:30:29.249931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.806 [2024-05-15 12:30:29.249948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.806 [2024-05-15 12:30:29.249958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.806 [2024-05-15 12:30:29.249966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.806 [2024-05-15 12:30:29.249984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.806 qpair failed and we were unable to recover it. 00:29:00.806 [2024-05-15 12:30:29.259843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.806 [2024-05-15 12:30:29.259956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.806 [2024-05-15 12:30:29.259974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.806 [2024-05-15 12:30:29.259983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.806 [2024-05-15 12:30:29.259992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.806 [2024-05-15 12:30:29.260010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.806 qpair failed and we were unable to recover it. 00:29:00.806 [2024-05-15 12:30:29.269879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.806 [2024-05-15 12:30:29.269989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.806 [2024-05-15 12:30:29.270007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.806 [2024-05-15 12:30:29.270016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.806 [2024-05-15 12:30:29.270025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.806 [2024-05-15 12:30:29.270043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.806 qpair failed and we were unable to recover it. 
00:29:00.806 [2024-05-15 12:30:29.279887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.806 [2024-05-15 12:30:29.280001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.806 [2024-05-15 12:30:29.280019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.806 [2024-05-15 12:30:29.280028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.806 [2024-05-15 12:30:29.280037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.806 [2024-05-15 12:30:29.280055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.806 qpair failed and we were unable to recover it. 00:29:00.806 [2024-05-15 12:30:29.289943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.806 [2024-05-15 12:30:29.290055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.806 [2024-05-15 12:30:29.290073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.806 [2024-05-15 12:30:29.290082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.806 [2024-05-15 12:30:29.290091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.806 [2024-05-15 12:30:29.290110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.806 qpair failed and we were unable to recover it. 00:29:00.806 [2024-05-15 12:30:29.299969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.806 [2024-05-15 12:30:29.300081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.806 [2024-05-15 12:30:29.300099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.806 [2024-05-15 12:30:29.300111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.806 [2024-05-15 12:30:29.300120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.806 [2024-05-15 12:30:29.300137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.806 qpair failed and we were unable to recover it. 
00:29:00.806 [2024-05-15 12:30:29.309987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.806 [2024-05-15 12:30:29.310096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.806 [2024-05-15 12:30:29.310113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.806 [2024-05-15 12:30:29.310123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.806 [2024-05-15 12:30:29.310131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.806 [2024-05-15 12:30:29.310149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.806 qpair failed and we were unable to recover it. 00:29:00.806 [2024-05-15 12:30:29.320003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.806 [2024-05-15 12:30:29.320126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.806 [2024-05-15 12:30:29.320144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.806 [2024-05-15 12:30:29.320153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.806 [2024-05-15 12:30:29.320162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:00.806 [2024-05-15 12:30:29.320180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:00.806 qpair failed and we were unable to recover it. 00:29:01.092 [2024-05-15 12:30:29.329998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-05-15 12:30:29.330121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-05-15 12:30:29.330138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-05-15 12:30:29.330148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.092 [2024-05-15 12:30:29.330156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.092 [2024-05-15 12:30:29.330174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.092 qpair failed and we were unable to recover it. 
00:29:01.092 [2024-05-15 12:30:29.340074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-05-15 12:30:29.340201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-05-15 12:30:29.340219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-05-15 12:30:29.340229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.092 [2024-05-15 12:30:29.340237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.092 [2024-05-15 12:30:29.340256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.092 qpair failed and we were unable to recover it. 00:29:01.092 [2024-05-15 12:30:29.350129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-05-15 12:30:29.350254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-05-15 12:30:29.350271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-05-15 12:30:29.350281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.092 [2024-05-15 12:30:29.350289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.092 [2024-05-15 12:30:29.350308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.092 qpair failed and we were unable to recover it. 00:29:01.092 [2024-05-15 12:30:29.360138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-05-15 12:30:29.360263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-05-15 12:30:29.360281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-05-15 12:30:29.360291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.092 [2024-05-15 12:30:29.360299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.092 [2024-05-15 12:30:29.360317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.092 qpair failed and we were unable to recover it. 
00:29:01.092 [2024-05-15 12:30:29.370157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-05-15 12:30:29.370276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-05-15 12:30:29.370294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-05-15 12:30:29.370303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.092 [2024-05-15 12:30:29.370312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.092 [2024-05-15 12:30:29.370331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.092 qpair failed and we were unable to recover it. 00:29:01.092 [2024-05-15 12:30:29.380114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-05-15 12:30:29.380237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-05-15 12:30:29.380254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-05-15 12:30:29.380264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.092 [2024-05-15 12:30:29.380272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.092 [2024-05-15 12:30:29.380291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.092 qpair failed and we were unable to recover it. 00:29:01.092 [2024-05-15 12:30:29.390222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-05-15 12:30:29.390334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-05-15 12:30:29.390355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-05-15 12:30:29.390364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.092 [2024-05-15 12:30:29.390372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.092 [2024-05-15 12:30:29.390391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.092 qpair failed and we were unable to recover it. 
00:29:01.092 [2024-05-15 12:30:29.400167] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-05-15 12:30:29.400292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-05-15 12:30:29.400309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-05-15 12:30:29.400319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.092 [2024-05-15 12:30:29.400327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.092 [2024-05-15 12:30:29.400346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.092 qpair failed and we were unable to recover it. 00:29:01.092 [2024-05-15 12:30:29.410269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-05-15 12:30:29.410385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-05-15 12:30:29.410402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-05-15 12:30:29.410412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.092 [2024-05-15 12:30:29.410421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.092 [2024-05-15 12:30:29.410439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.092 qpair failed and we were unable to recover it. 00:29:01.092 [2024-05-15 12:30:29.420297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-05-15 12:30:29.420415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-05-15 12:30:29.420433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-05-15 12:30:29.420443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.092 [2024-05-15 12:30:29.420452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.092 [2024-05-15 12:30:29.420471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.092 qpair failed and we were unable to recover it. 
00:29:01.092 [2024-05-15 12:30:29.430366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.092 [2024-05-15 12:30:29.430525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.092 [2024-05-15 12:30:29.430542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.092 [2024-05-15 12:30:29.430552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.430560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.430582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 00:29:01.093 [2024-05-15 12:30:29.440347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.440460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.440478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.440487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.440496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.440514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 00:29:01.093 [2024-05-15 12:30:29.450398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.450517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.450534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.450544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.450552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.450570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-05-15 12:30:29.460414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.460531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.460550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.460559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.460568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.460587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 00:29:01.093 [2024-05-15 12:30:29.470451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.470560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.470578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.470588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.470596] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.470614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 00:29:01.093 [2024-05-15 12:30:29.480468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.480580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.480600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.480610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.480618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.480636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-05-15 12:30:29.490500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.490613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.490631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.490640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.490649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.490667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 00:29:01.093 [2024-05-15 12:30:29.500519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.500638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.500656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.500665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.500674] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.500692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 00:29:01.093 [2024-05-15 12:30:29.510488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.510595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.510612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.510622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.510630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.510649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-05-15 12:30:29.520557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.520710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.520728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.520737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.520749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.520767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 00:29:01.093 [2024-05-15 12:30:29.530623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.530749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.530764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.530773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.530782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.530799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 00:29:01.093 [2024-05-15 12:30:29.540628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.540917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.540936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.540946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.540954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.540973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 
00:29:01.093 [2024-05-15 12:30:29.550676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.550796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.550813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.550822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.550831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.550849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 00:29:01.093 [2024-05-15 12:30:29.560670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.560786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.093 [2024-05-15 12:30:29.560804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.093 [2024-05-15 12:30:29.560813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.093 [2024-05-15 12:30:29.560822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.093 [2024-05-15 12:30:29.560839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.093 qpair failed and we were unable to recover it. 00:29:01.093 [2024-05-15 12:30:29.570753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.093 [2024-05-15 12:30:29.570921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.094 [2024-05-15 12:30:29.570939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.094 [2024-05-15 12:30:29.570949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.094 [2024-05-15 12:30:29.570958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.094 [2024-05-15 12:30:29.570977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.094 qpair failed and we were unable to recover it. 
00:29:01.094 [2024-05-15 12:30:29.580787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.094 [2024-05-15 12:30:29.580919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.094 [2024-05-15 12:30:29.580937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.094 [2024-05-15 12:30:29.580946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.094 [2024-05-15 12:30:29.580955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.094 [2024-05-15 12:30:29.580973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.094 qpair failed and we were unable to recover it. 00:29:01.094 [2024-05-15 12:30:29.590792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.094 [2024-05-15 12:30:29.590940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.094 [2024-05-15 12:30:29.590958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.094 [2024-05-15 12:30:29.590968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.094 [2024-05-15 12:30:29.590976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.094 [2024-05-15 12:30:29.590995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.094 qpair failed and we were unable to recover it. 00:29:01.094 [2024-05-15 12:30:29.600790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.094 [2024-05-15 12:30:29.600907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.094 [2024-05-15 12:30:29.600924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.094 [2024-05-15 12:30:29.600934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.094 [2024-05-15 12:30:29.600942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.094 [2024-05-15 12:30:29.600961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.094 qpair failed and we were unable to recover it. 
00:29:01.094 [2024-05-15 12:30:29.610837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.094 [2024-05-15 12:30:29.610955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.094 [2024-05-15 12:30:29.610973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.094 [2024-05-15 12:30:29.610983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.094 [2024-05-15 12:30:29.610994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.094 [2024-05-15 12:30:29.611013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.094 qpair failed and we were unable to recover it. 00:29:01.353 [2024-05-15 12:30:29.621026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.353 [2024-05-15 12:30:29.621198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.353 [2024-05-15 12:30:29.621216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.353 [2024-05-15 12:30:29.621226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.353 [2024-05-15 12:30:29.621234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.353 [2024-05-15 12:30:29.621254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.353 qpair failed and we were unable to recover it. 00:29:01.353 [2024-05-15 12:30:29.630889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.353 [2024-05-15 12:30:29.631027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.353 [2024-05-15 12:30:29.631044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.353 [2024-05-15 12:30:29.631053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.353 [2024-05-15 12:30:29.631062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.353 [2024-05-15 12:30:29.631080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.353 qpair failed and we were unable to recover it. 
00:29:01.353 [2024-05-15 12:30:29.640985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.353 [2024-05-15 12:30:29.641107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.353 [2024-05-15 12:30:29.641125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.353 [2024-05-15 12:30:29.641135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.353 [2024-05-15 12:30:29.641143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.353 [2024-05-15 12:30:29.641162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.353 qpair failed and we were unable to recover it. 00:29:01.353 [2024-05-15 12:30:29.650958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.353 [2024-05-15 12:30:29.651075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.353 [2024-05-15 12:30:29.651092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.353 [2024-05-15 12:30:29.651102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.353 [2024-05-15 12:30:29.651110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.353 [2024-05-15 12:30:29.651128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.353 qpair failed and we were unable to recover it. 00:29:01.353 [2024-05-15 12:30:29.660979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.353 [2024-05-15 12:30:29.661095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.353 [2024-05-15 12:30:29.661113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.353 [2024-05-15 12:30:29.661122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.353 [2024-05-15 12:30:29.661131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.353 [2024-05-15 12:30:29.661149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.353 qpair failed and we were unable to recover it. 
00:29:01.353 [2024-05-15 12:30:29.671024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.353 [2024-05-15 12:30:29.671154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.353 [2024-05-15 12:30:29.671172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.353 [2024-05-15 12:30:29.671182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.353 [2024-05-15 12:30:29.671196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.353 [2024-05-15 12:30:29.671215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.353 qpair failed and we were unable to recover it. 00:29:01.353 [2024-05-15 12:30:29.681019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.353 [2024-05-15 12:30:29.681133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.353 [2024-05-15 12:30:29.681151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.353 [2024-05-15 12:30:29.681160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.353 [2024-05-15 12:30:29.681169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.353 [2024-05-15 12:30:29.681187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.353 qpair failed and we were unable to recover it. 00:29:01.353 [2024-05-15 12:30:29.691057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.353 [2024-05-15 12:30:29.691182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.353 [2024-05-15 12:30:29.691206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.353 [2024-05-15 12:30:29.691216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.353 [2024-05-15 12:30:29.691225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.353 [2024-05-15 12:30:29.691243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.353 qpair failed and we were unable to recover it. 
00:29:01.354 [2024-05-15 12:30:29.701104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.701233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.701251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.701264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.701272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.701290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 00:29:01.354 [2024-05-15 12:30:29.711162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.711290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.711307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.711317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.711325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.711343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 00:29:01.354 [2024-05-15 12:30:29.721140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.721264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.721282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.721291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.721300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.721318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 
00:29:01.354 [2024-05-15 12:30:29.731198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.731325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.731342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.731352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.731360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.731378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 00:29:01.354 [2024-05-15 12:30:29.741215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.741326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.741343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.741353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.741361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.741380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 00:29:01.354 [2024-05-15 12:30:29.751244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.751360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.751377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.751387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.751395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.751414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 
00:29:01.354 [2024-05-15 12:30:29.761252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.761366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.761384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.761393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.761402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.761419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 00:29:01.354 [2024-05-15 12:30:29.771298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.771418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.771436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.771445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.771454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.771472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 00:29:01.354 [2024-05-15 12:30:29.781331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.781444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.781462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.781471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.781479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.781497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 
00:29:01.354 [2024-05-15 12:30:29.791363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.791477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.791498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.791507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.791516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.791534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 00:29:01.354 [2024-05-15 12:30:29.801399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.801513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.801531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.801540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.801549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.801566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 00:29:01.354 [2024-05-15 12:30:29.811383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.811503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.811521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.811531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.811539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.811558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 
00:29:01.354 [2024-05-15 12:30:29.821413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.821531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.821549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.821558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.821566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.354 [2024-05-15 12:30:29.821585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.354 qpair failed and we were unable to recover it. 00:29:01.354 [2024-05-15 12:30:29.831452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.354 [2024-05-15 12:30:29.831587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.354 [2024-05-15 12:30:29.831605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.354 [2024-05-15 12:30:29.831614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.354 [2024-05-15 12:30:29.831623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.355 [2024-05-15 12:30:29.831644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.355 qpair failed and we were unable to recover it. 00:29:01.355 [2024-05-15 12:30:29.841483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.355 [2024-05-15 12:30:29.841601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.355 [2024-05-15 12:30:29.841619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.355 [2024-05-15 12:30:29.841629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.355 [2024-05-15 12:30:29.841638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.355 [2024-05-15 12:30:29.841657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.355 qpair failed and we were unable to recover it. 
00:29:01.355 [2024-05-15 12:30:29.851526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.355 [2024-05-15 12:30:29.851641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.355 [2024-05-15 12:30:29.851658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.355 [2024-05-15 12:30:29.851668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.355 [2024-05-15 12:30:29.851677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.355 [2024-05-15 12:30:29.851696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.355 qpair failed and we were unable to recover it. 00:29:01.355 [2024-05-15 12:30:29.861551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.355 [2024-05-15 12:30:29.861666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.355 [2024-05-15 12:30:29.861684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.355 [2024-05-15 12:30:29.861693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.355 [2024-05-15 12:30:29.861702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.355 [2024-05-15 12:30:29.861721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.355 qpair failed and we were unable to recover it. 00:29:01.355 [2024-05-15 12:30:29.871597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.355 [2024-05-15 12:30:29.871709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.355 [2024-05-15 12:30:29.871726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.355 [2024-05-15 12:30:29.871736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.355 [2024-05-15 12:30:29.871745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.355 [2024-05-15 12:30:29.871763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.355 qpair failed and we were unable to recover it. 
00:29:01.614 [2024-05-15 12:30:29.881629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.614 [2024-05-15 12:30:29.881756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.614 [2024-05-15 12:30:29.881779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.614 [2024-05-15 12:30:29.881788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.614 [2024-05-15 12:30:29.881797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.614 [2024-05-15 12:30:29.881816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.614 qpair failed and we were unable to recover it. 00:29:01.614 [2024-05-15 12:30:29.891637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.614 [2024-05-15 12:30:29.891761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.614 [2024-05-15 12:30:29.891778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.614 [2024-05-15 12:30:29.891788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.614 [2024-05-15 12:30:29.891796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.614 [2024-05-15 12:30:29.891815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.614 qpair failed and we were unable to recover it. 00:29:01.614 [2024-05-15 12:30:29.901674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.614 [2024-05-15 12:30:29.901785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.614 [2024-05-15 12:30:29.901803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.614 [2024-05-15 12:30:29.901812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.614 [2024-05-15 12:30:29.901821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.614 [2024-05-15 12:30:29.901839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.614 qpair failed and we were unable to recover it. 
00:29:01.614 [2024-05-15 12:30:29.911694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.614 [2024-05-15 12:30:29.911811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.614 [2024-05-15 12:30:29.911828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.614 [2024-05-15 12:30:29.911838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.614 [2024-05-15 12:30:29.911847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.614 [2024-05-15 12:30:29.911866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.614 qpair failed and we were unable to recover it. 00:29:01.614 [2024-05-15 12:30:29.921736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.614 [2024-05-15 12:30:29.921852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.614 [2024-05-15 12:30:29.921869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.614 [2024-05-15 12:30:29.921879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.614 [2024-05-15 12:30:29.921887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.614 [2024-05-15 12:30:29.921909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.614 qpair failed and we were unable to recover it. 00:29:01.614 [2024-05-15 12:30:29.931738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.614 [2024-05-15 12:30:29.931854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.614 [2024-05-15 12:30:29.931872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.614 [2024-05-15 12:30:29.931881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.614 [2024-05-15 12:30:29.931890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.614 [2024-05-15 12:30:29.931908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.614 qpair failed and we were unable to recover it. 
00:29:01.614 [2024-05-15 12:30:29.941786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.614 [2024-05-15 12:30:29.941907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.614 [2024-05-15 12:30:29.941924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.614 [2024-05-15 12:30:29.941934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.614 [2024-05-15 12:30:29.941942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.614 [2024-05-15 12:30:29.941961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.614 qpair failed and we were unable to recover it. 00:29:01.614 [2024-05-15 12:30:29.951810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.614 [2024-05-15 12:30:29.951923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.614 [2024-05-15 12:30:29.951940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.614 [2024-05-15 12:30:29.951949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.614 [2024-05-15 12:30:29.951958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.614 [2024-05-15 12:30:29.951977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.614 qpair failed and we were unable to recover it. 00:29:01.614 [2024-05-15 12:30:29.961816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.614 [2024-05-15 12:30:29.961931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.614 [2024-05-15 12:30:29.961948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.614 [2024-05-15 12:30:29.961957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.614 [2024-05-15 12:30:29.961965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.614 [2024-05-15 12:30:29.961984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.614 qpair failed and we were unable to recover it. 
00:29:01.614 [2024-05-15 12:30:29.971858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.614 [2024-05-15 12:30:29.971976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.614 [2024-05-15 12:30:29.971994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.614 [2024-05-15 12:30:29.972004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.614 [2024-05-15 12:30:29.972012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:29.972031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-05-15 12:30:29.981824] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:29.981941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:29.981959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:29.981968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:29.981977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:29.981995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-05-15 12:30:29.991823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:29.991940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:29.991958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:29.991968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:29.991976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:29.991994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 
00:29:01.615 [2024-05-15 12:30:30.002110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:30.002235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:30.002254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:30.002264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:30.002272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:30.002291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-05-15 12:30:30.011949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:30.012062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:30.012082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:30.012092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:30.012104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:30.012124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-05-15 12:30:30.021946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:30.022064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:30.022086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:30.022097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:30.022106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:30.022125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 
00:29:01.615 [2024-05-15 12:30:30.032011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:30.032130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:30.032150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:30.032162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:30.032172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:30.032197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-05-15 12:30:30.042064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:30.042188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:30.042214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:30.042224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:30.042233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:30.042252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-05-15 12:30:30.052147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:30.052276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:30.052297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:30.052309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:30.052318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:30.052339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 
00:29:01.615 [2024-05-15 12:30:30.062144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:30.062264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:30.062283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:30.062294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:30.062302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:30.062321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-05-15 12:30:30.072158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:30.072283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:30.072302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:30.072312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:30.072320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:30.072340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-05-15 12:30:30.082152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:30.082274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:30.082293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:30.082303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:30.082312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:30.082331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 
00:29:01.615 [2024-05-15 12:30:30.092138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:30.092297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:30.092318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:30.092329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:30.092338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:30.092358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-05-15 12:30:30.102204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:30.102318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:30.102338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.615 [2024-05-15 12:30:30.102351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.615 [2024-05-15 12:30:30.102360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.615 [2024-05-15 12:30:30.102380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-05-15 12:30:30.112252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.615 [2024-05-15 12:30:30.112370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.615 [2024-05-15 12:30:30.112389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.616 [2024-05-15 12:30:30.112399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.616 [2024-05-15 12:30:30.112408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.616 [2024-05-15 12:30:30.112428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.616 qpair failed and we were unable to recover it. 
00:29:01.616 [2024-05-15 12:30:30.122262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.616 [2024-05-15 12:30:30.122381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.616 [2024-05-15 12:30:30.122400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.616 [2024-05-15 12:30:30.122410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.616 [2024-05-15 12:30:30.122418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.616 [2024-05-15 12:30:30.122437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-05-15 12:30:30.132304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.616 [2024-05-15 12:30:30.132454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.616 [2024-05-15 12:30:30.132473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.616 [2024-05-15 12:30:30.132483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.616 [2024-05-15 12:30:30.132492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.616 [2024-05-15 12:30:30.132510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.875 [2024-05-15 12:30:30.142405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.875 [2024-05-15 12:30:30.142566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.875 [2024-05-15 12:30:30.142584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.875 [2024-05-15 12:30:30.142594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.875 [2024-05-15 12:30:30.142603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.875 [2024-05-15 12:30:30.142621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.875 qpair failed and we were unable to recover it. 
00:29:01.875 [2024-05-15 12:30:30.152334] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.875 [2024-05-15 12:30:30.152492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.875 [2024-05-15 12:30:30.152511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.875 [2024-05-15 12:30:30.152521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.875 [2024-05-15 12:30:30.152529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.875 [2024-05-15 12:30:30.152548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.875 qpair failed and we were unable to recover it. 00:29:01.875 [2024-05-15 12:30:30.162386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.875 [2024-05-15 12:30:30.162508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.875 [2024-05-15 12:30:30.162527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.875 [2024-05-15 12:30:30.162537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.875 [2024-05-15 12:30:30.162546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.875 [2024-05-15 12:30:30.162564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.875 qpair failed and we were unable to recover it. 00:29:01.875 [2024-05-15 12:30:30.172447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.875 [2024-05-15 12:30:30.172568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.875 [2024-05-15 12:30:30.172587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.875 [2024-05-15 12:30:30.172597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.875 [2024-05-15 12:30:30.172605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.875 [2024-05-15 12:30:30.172624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.875 qpair failed and we were unable to recover it. 
00:29:01.875 [2024-05-15 12:30:30.182427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.875 [2024-05-15 12:30:30.182544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.875 [2024-05-15 12:30:30.182563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.875 [2024-05-15 12:30:30.182573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.875 [2024-05-15 12:30:30.182582] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.875 [2024-05-15 12:30:30.182601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.875 qpair failed and we were unable to recover it. 00:29:01.875 [2024-05-15 12:30:30.192482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.875 [2024-05-15 12:30:30.192595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.875 [2024-05-15 12:30:30.192617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.875 [2024-05-15 12:30:30.192627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.875 [2024-05-15 12:30:30.192636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.875 [2024-05-15 12:30:30.192654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.875 qpair failed and we were unable to recover it. 00:29:01.875 [2024-05-15 12:30:30.202454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.202568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.202587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.202597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.202606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.202624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 
00:29:01.876 [2024-05-15 12:30:30.212513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.212630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.212648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.212658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.212667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.212686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 00:29:01.876 [2024-05-15 12:30:30.222492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.222607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.222626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.222636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.222644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.222663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 00:29:01.876 [2024-05-15 12:30:30.232605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.232727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.232746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.232756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.232765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.232783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 
00:29:01.876 [2024-05-15 12:30:30.242561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.242713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.242732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.242742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.242751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.242769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 00:29:01.876 [2024-05-15 12:30:30.252657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.252776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.252795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.252804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.252813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.252832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 00:29:01.876 [2024-05-15 12:30:30.262682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.262965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.262984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.262994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.263003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.263021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 
00:29:01.876 [2024-05-15 12:30:30.272724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.272836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.272854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.272865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.272873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.272892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 00:29:01.876 [2024-05-15 12:30:30.282718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.282836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.282858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.282867] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.282876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.282895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 00:29:01.876 [2024-05-15 12:30:30.292692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.292812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.292831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.292840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.292849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.292867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 
00:29:01.876 [2024-05-15 12:30:30.302793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.302906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.302925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.302935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.302943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.302962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 00:29:01.876 [2024-05-15 12:30:30.312827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.312941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.312960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.312970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.312979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.312997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 00:29:01.876 [2024-05-15 12:30:30.322789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.322904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.322922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.322932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.322941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.322962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.876 qpair failed and we were unable to recover it. 
00:29:01.876 [2024-05-15 12:30:30.332890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.876 [2024-05-15 12:30:30.333006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.876 [2024-05-15 12:30:30.333025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.876 [2024-05-15 12:30:30.333034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.876 [2024-05-15 12:30:30.333043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.876 [2024-05-15 12:30:30.333062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.877 qpair failed and we were unable to recover it. 00:29:01.877 [2024-05-15 12:30:30.342916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.877 [2024-05-15 12:30:30.343027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.877 [2024-05-15 12:30:30.343046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.877 [2024-05-15 12:30:30.343055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.877 [2024-05-15 12:30:30.343064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.877 [2024-05-15 12:30:30.343082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.877 qpair failed and we were unable to recover it. 00:29:01.877 [2024-05-15 12:30:30.352936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.877 [2024-05-15 12:30:30.353053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.877 [2024-05-15 12:30:30.353071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.877 [2024-05-15 12:30:30.353080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.877 [2024-05-15 12:30:30.353089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.877 [2024-05-15 12:30:30.353107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.877 qpair failed and we were unable to recover it. 
00:29:01.877 [2024-05-15 12:30:30.362971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.877 [2024-05-15 12:30:30.363088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.877 [2024-05-15 12:30:30.363107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.877 [2024-05-15 12:30:30.363117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.877 [2024-05-15 12:30:30.363125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.877 [2024-05-15 12:30:30.363144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.877 qpair failed and we were unable to recover it. 00:29:01.877 [2024-05-15 12:30:30.373026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.877 [2024-05-15 12:30:30.373162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.877 [2024-05-15 12:30:30.373184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.877 [2024-05-15 12:30:30.373201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.877 [2024-05-15 12:30:30.373210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.877 [2024-05-15 12:30:30.373229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.877 qpair failed and we were unable to recover it. 00:29:01.877 [2024-05-15 12:30:30.382998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.877 [2024-05-15 12:30:30.383109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.877 [2024-05-15 12:30:30.383127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.877 [2024-05-15 12:30:30.383137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.877 [2024-05-15 12:30:30.383145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.877 [2024-05-15 12:30:30.383164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.877 qpair failed and we were unable to recover it. 
00:29:01.877 [2024-05-15 12:30:30.393051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.877 [2024-05-15 12:30:30.393167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.877 [2024-05-15 12:30:30.393185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.877 [2024-05-15 12:30:30.393202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.877 [2024-05-15 12:30:30.393211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.877 [2024-05-15 12:30:30.393229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.877 qpair failed and we were unable to recover it. 00:29:01.877 [2024-05-15 12:30:30.403069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.877 [2024-05-15 12:30:30.403196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.877 [2024-05-15 12:30:30.403215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.877 [2024-05-15 12:30:30.403225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.877 [2024-05-15 12:30:30.403234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:01.877 [2024-05-15 12:30:30.403252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:01.877 qpair failed and we were unable to recover it. 00:29:02.135 [2024-05-15 12:30:30.413095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.135 [2024-05-15 12:30:30.413223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.135 [2024-05-15 12:30:30.413241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.135 [2024-05-15 12:30:30.413251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.135 [2024-05-15 12:30:30.413263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.135 [2024-05-15 12:30:30.413282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.135 qpair failed and we were unable to recover it. 
00:29:02.135 [2024-05-15 12:30:30.423141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.135 [2024-05-15 12:30:30.423273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.135 [2024-05-15 12:30:30.423291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.135 [2024-05-15 12:30:30.423301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.135 [2024-05-15 12:30:30.423310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.135 [2024-05-15 12:30:30.423329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.135 qpair failed and we were unable to recover it. 00:29:02.135 [2024-05-15 12:30:30.433176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.135 [2024-05-15 12:30:30.433304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.135 [2024-05-15 12:30:30.433323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.135 [2024-05-15 12:30:30.433332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.135 [2024-05-15 12:30:30.433341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.135 [2024-05-15 12:30:30.433359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.135 qpair failed and we were unable to recover it. 00:29:02.135 [2024-05-15 12:30:30.443198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.135 [2024-05-15 12:30:30.443315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.135 [2024-05-15 12:30:30.443333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.135 [2024-05-15 12:30:30.443343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.135 [2024-05-15 12:30:30.443352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.135 [2024-05-15 12:30:30.443370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.135 qpair failed and we were unable to recover it. 
00:29:02.135 [2024-05-15 12:30:30.453201] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.135 [2024-05-15 12:30:30.453321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.135 [2024-05-15 12:30:30.453338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.135 [2024-05-15 12:30:30.453347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.135 [2024-05-15 12:30:30.453356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.135 [2024-05-15 12:30:30.453375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.135 qpair failed and we were unable to recover it. 00:29:02.135 [2024-05-15 12:30:30.463255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.135 [2024-05-15 12:30:30.463373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.135 [2024-05-15 12:30:30.463393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.135 [2024-05-15 12:30:30.463403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.135 [2024-05-15 12:30:30.463411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.463430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 00:29:02.136 [2024-05-15 12:30:30.473294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.473411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.473430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.473440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.473448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.473468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 
00:29:02.136 [2024-05-15 12:30:30.483296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.483414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.483437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.483447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.483455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.483475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 00:29:02.136 [2024-05-15 12:30:30.493346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.493464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.493483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.493492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.493501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.493520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 00:29:02.136 [2024-05-15 12:30:30.503355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.503473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.503491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.503505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.503514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.503532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 
00:29:02.136 [2024-05-15 12:30:30.513354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.513488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.513506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.513516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.513525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.513543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 00:29:02.136 [2024-05-15 12:30:30.523398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.523513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.523531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.523541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.523550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.523568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 00:29:02.136 [2024-05-15 12:30:30.533455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.533573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.533592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.533602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.533610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.533629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 
00:29:02.136 [2024-05-15 12:30:30.543461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.543577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.543596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.543606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.543614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.543632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 00:29:02.136 [2024-05-15 12:30:30.553495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.553609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.553628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.553638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.553646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.553665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 00:29:02.136 [2024-05-15 12:30:30.563511] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.563629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.563647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.563657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.563666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.563684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 
00:29:02.136 [2024-05-15 12:30:30.573543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.573659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.573678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.573688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.573697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.573716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 00:29:02.136 [2024-05-15 12:30:30.583545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.583663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.583682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.583691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.583700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.583718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 00:29:02.136 [2024-05-15 12:30:30.593598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.593711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.593730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.136 [2024-05-15 12:30:30.593743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.136 [2024-05-15 12:30:30.593752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.136 [2024-05-15 12:30:30.593771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.136 qpair failed and we were unable to recover it. 
00:29:02.136 [2024-05-15 12:30:30.603615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.136 [2024-05-15 12:30:30.603729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.136 [2024-05-15 12:30:30.603748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.137 [2024-05-15 12:30:30.603758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.137 [2024-05-15 12:30:30.603766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.137 [2024-05-15 12:30:30.603785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.137 qpair failed and we were unable to recover it. 00:29:02.137 [2024-05-15 12:30:30.613649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.137 [2024-05-15 12:30:30.613765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.137 [2024-05-15 12:30:30.613784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.137 [2024-05-15 12:30:30.613794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.137 [2024-05-15 12:30:30.613802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.137 [2024-05-15 12:30:30.613821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.137 qpair failed and we were unable to recover it. 00:29:02.137 [2024-05-15 12:30:30.623682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.137 [2024-05-15 12:30:30.623793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.137 [2024-05-15 12:30:30.623811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.137 [2024-05-15 12:30:30.623821] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.137 [2024-05-15 12:30:30.623830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.137 [2024-05-15 12:30:30.623848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.137 qpair failed and we were unable to recover it. 
00:29:02.137 [2024-05-15 12:30:30.633763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.137 [2024-05-15 12:30:30.633896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.137 [2024-05-15 12:30:30.633915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.137 [2024-05-15 12:30:30.633925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.137 [2024-05-15 12:30:30.633933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.137 [2024-05-15 12:30:30.633953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.137 qpair failed and we were unable to recover it. 00:29:02.137 [2024-05-15 12:30:30.643722] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.137 [2024-05-15 12:30:30.643836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.137 [2024-05-15 12:30:30.643854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.137 [2024-05-15 12:30:30.643864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.137 [2024-05-15 12:30:30.643873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.137 [2024-05-15 12:30:30.643891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.137 qpair failed and we were unable to recover it. 00:29:02.137 [2024-05-15 12:30:30.653773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.137 [2024-05-15 12:30:30.653930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.137 [2024-05-15 12:30:30.653948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.137 [2024-05-15 12:30:30.653958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.137 [2024-05-15 12:30:30.653967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.137 [2024-05-15 12:30:30.653986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.137 qpair failed and we were unable to recover it. 
00:29:02.396 [2024-05-15 12:30:30.663811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.396 [2024-05-15 12:30:30.663935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.396 [2024-05-15 12:30:30.663954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.396 [2024-05-15 12:30:30.663964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.396 [2024-05-15 12:30:30.663972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.396 [2024-05-15 12:30:30.663991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-05-15 12:30:30.673793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.396 [2024-05-15 12:30:30.673912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.396 [2024-05-15 12:30:30.673931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.396 [2024-05-15 12:30:30.673942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.396 [2024-05-15 12:30:30.673951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.396 [2024-05-15 12:30:30.673969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.396 qpair failed and we were unable to recover it. 00:29:02.396 [2024-05-15 12:30:30.683854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.396 [2024-05-15 12:30:30.683975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.396 [2024-05-15 12:30:30.683997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.684007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.684016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.684035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 
00:29:02.397 [2024-05-15 12:30:30.693888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.694008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.694027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.694037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.694046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.694065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-05-15 12:30:30.703919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.704039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.704057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.704067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.704076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.704094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-05-15 12:30:30.713915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.714030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.714049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.714059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.714067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.714085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 
00:29:02.397 [2024-05-15 12:30:30.723960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.724079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.724098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.724108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.724117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.724139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-05-15 12:30:30.734003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.734119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.734137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.734147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.734155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.734175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-05-15 12:30:30.744032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.744147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.744165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.744176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.744184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.744210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 
00:29:02.397 [2024-05-15 12:30:30.753983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.754095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.754113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.754123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.754132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.754150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-05-15 12:30:30.764069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.764182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.764207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.764217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.764226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.764245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-05-15 12:30:30.774143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.774267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.774289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.774299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.774307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.774326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 
00:29:02.397 [2024-05-15 12:30:30.784144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.784267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.784286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.784296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.784304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.784323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-05-15 12:30:30.794165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.794307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.794326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.794336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.794344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.794363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.397 [2024-05-15 12:30:30.804163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.804465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.804484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.804494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.804502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.804521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 
00:29:02.397 [2024-05-15 12:30:30.814229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.397 [2024-05-15 12:30:30.814345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.397 [2024-05-15 12:30:30.814364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.397 [2024-05-15 12:30:30.814374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.397 [2024-05-15 12:30:30.814386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.397 [2024-05-15 12:30:30.814404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.397 qpair failed and we were unable to recover it. 00:29:02.398 [2024-05-15 12:30:30.824353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.398 [2024-05-15 12:30:30.824468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.398 [2024-05-15 12:30:30.824486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.398 [2024-05-15 12:30:30.824496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.398 [2024-05-15 12:30:30.824505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.398 [2024-05-15 12:30:30.824523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-05-15 12:30:30.834287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.398 [2024-05-15 12:30:30.834402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.398 [2024-05-15 12:30:30.834420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.398 [2024-05-15 12:30:30.834430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.398 [2024-05-15 12:30:30.834438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.398 [2024-05-15 12:30:30.834458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.398 qpair failed and we were unable to recover it. 
00:29:02.398 [2024-05-15 12:30:30.844313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.398 [2024-05-15 12:30:30.844427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.398 [2024-05-15 12:30:30.844445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.398 [2024-05-15 12:30:30.844455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.398 [2024-05-15 12:30:30.844464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.398 [2024-05-15 12:30:30.844483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-05-15 12:30:30.854338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.398 [2024-05-15 12:30:30.854456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.398 [2024-05-15 12:30:30.854474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.398 [2024-05-15 12:30:30.854484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.398 [2024-05-15 12:30:30.854493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.398 [2024-05-15 12:30:30.854512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-05-15 12:30:30.864394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.398 [2024-05-15 12:30:30.864517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.398 [2024-05-15 12:30:30.864536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.398 [2024-05-15 12:30:30.864546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.398 [2024-05-15 12:30:30.864555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.398 [2024-05-15 12:30:30.864574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.398 qpair failed and we were unable to recover it. 
00:29:02.398 [2024-05-15 12:30:30.874592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.398 [2024-05-15 12:30:30.874709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.398 [2024-05-15 12:30:30.874727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.398 [2024-05-15 12:30:30.874737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.398 [2024-05-15 12:30:30.874746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.398 [2024-05-15 12:30:30.874765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-05-15 12:30:30.884424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.398 [2024-05-15 12:30:30.884544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.398 [2024-05-15 12:30:30.884563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.398 [2024-05-15 12:30:30.884573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.398 [2024-05-15 12:30:30.884581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.398 [2024-05-15 12:30:30.884600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-05-15 12:30:30.894643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.398 [2024-05-15 12:30:30.894768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.398 [2024-05-15 12:30:30.894786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.398 [2024-05-15 12:30:30.894796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.398 [2024-05-15 12:30:30.894805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.398 [2024-05-15 12:30:30.894824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.398 qpair failed and we were unable to recover it. 
00:29:02.398 [2024-05-15 12:30:30.904487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.398 [2024-05-15 12:30:30.904603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.398 [2024-05-15 12:30:30.904622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.398 [2024-05-15 12:30:30.904632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.398 [2024-05-15 12:30:30.904644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.398 [2024-05-15 12:30:30.904663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-05-15 12:30:30.914524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.398 [2024-05-15 12:30:30.914635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.398 [2024-05-15 12:30:30.914654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.398 [2024-05-15 12:30:30.914664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.398 [2024-05-15 12:30:30.914672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.398 [2024-05-15 12:30:30.914691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.398 qpair failed and we were unable to recover it. 00:29:02.398 [2024-05-15 12:30:30.924551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.398 [2024-05-15 12:30:30.924684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.398 [2024-05-15 12:30:30.924702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.398 [2024-05-15 12:30:30.924712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.398 [2024-05-15 12:30:30.924720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.398 [2024-05-15 12:30:30.924739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.398 qpair failed and we were unable to recover it. 
00:29:02.657 [2024-05-15 12:30:30.934576] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.657 [2024-05-15 12:30:30.934699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.657 [2024-05-15 12:30:30.934717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.657 [2024-05-15 12:30:30.934727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.657 [2024-05-15 12:30:30.934735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.657 [2024-05-15 12:30:30.934754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.657 qpair failed and we were unable to recover it. 00:29:02.657 [2024-05-15 12:30:30.944595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.657 [2024-05-15 12:30:30.944709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.657 [2024-05-15 12:30:30.944728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.657 [2024-05-15 12:30:30.944738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.657 [2024-05-15 12:30:30.944746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.657 [2024-05-15 12:30:30.944764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.657 qpair failed and we were unable to recover it. 00:29:02.657 [2024-05-15 12:30:30.954625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.657 [2024-05-15 12:30:30.954741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.657 [2024-05-15 12:30:30.954759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:30.954768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:30.954777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:30.954796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 
00:29:02.658 [2024-05-15 12:30:30.964590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:30.964703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:30.964721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:30.964731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:30.964740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:30.964758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 00:29:02.658 [2024-05-15 12:30:30.974669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:30.974788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:30.974806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:30.974816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:30.974825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:30.974844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 00:29:02.658 [2024-05-15 12:30:30.984714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:30.984829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:30.984848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:30.984858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:30.984866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:30.984886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 
00:29:02.658 [2024-05-15 12:30:30.994745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:30.994859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:30.994878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:30.994890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:30.994899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:30.994918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 00:29:02.658 [2024-05-15 12:30:31.004774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:31.004887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:31.004906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:31.004916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:31.004924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:31.004942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 00:29:02.658 [2024-05-15 12:30:31.014822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:31.014957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:31.014975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:31.014985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:31.014994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:31.015013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 
00:29:02.658 [2024-05-15 12:30:31.024813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:31.024928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:31.024947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:31.024957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:31.024965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:31.024983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 00:29:02.658 [2024-05-15 12:30:31.034908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:31.035066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:31.035086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:31.035096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:31.035104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:31.035124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 00:29:02.658 [2024-05-15 12:30:31.044882] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:31.045000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:31.045019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:31.045029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:31.045038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:31.045056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 
00:29:02.658 [2024-05-15 12:30:31.054912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:31.055025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:31.055043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:31.055053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:31.055062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:31.055080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 00:29:02.658 [2024-05-15 12:30:31.064941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:31.065056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:31.065074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:31.065084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:31.065093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:31.065111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 00:29:02.658 [2024-05-15 12:30:31.074967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:31.075085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:31.075103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.658 [2024-05-15 12:30:31.075113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.658 [2024-05-15 12:30:31.075121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.658 [2024-05-15 12:30:31.075140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.658 qpair failed and we were unable to recover it. 
00:29:02.658 [2024-05-15 12:30:31.085035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.658 [2024-05-15 12:30:31.085185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.658 [2024-05-15 12:30:31.085222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.659 [2024-05-15 12:30:31.085233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.659 [2024-05-15 12:30:31.085242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.659 [2024-05-15 12:30:31.085261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.659 qpair failed and we were unable to recover it. 00:29:02.659 [2024-05-15 12:30:31.095020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.659 [2024-05-15 12:30:31.095139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.659 [2024-05-15 12:30:31.095157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.659 [2024-05-15 12:30:31.095167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.659 [2024-05-15 12:30:31.095176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.659 [2024-05-15 12:30:31.095200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.659 qpair failed and we were unable to recover it. 00:29:02.659 [2024-05-15 12:30:31.105050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.659 [2024-05-15 12:30:31.105165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.659 [2024-05-15 12:30:31.105183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.659 [2024-05-15 12:30:31.105199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.659 [2024-05-15 12:30:31.105208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.659 [2024-05-15 12:30:31.105226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.659 qpair failed and we were unable to recover it. 
00:29:02.659 [2024-05-15 12:30:31.115090] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.659 [2024-05-15 12:30:31.115208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.659 [2024-05-15 12:30:31.115227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.659 [2024-05-15 12:30:31.115237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.659 [2024-05-15 12:30:31.115245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.659 [2024-05-15 12:30:31.115265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.659 qpair failed and we were unable to recover it. 00:29:02.659 [2024-05-15 12:30:31.125094] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.659 [2024-05-15 12:30:31.125382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.659 [2024-05-15 12:30:31.125401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.659 [2024-05-15 12:30:31.125410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.659 [2024-05-15 12:30:31.125419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.659 [2024-05-15 12:30:31.125441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.659 qpair failed and we were unable to recover it. 00:29:02.659 [2024-05-15 12:30:31.135147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.659 [2024-05-15 12:30:31.135269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.659 [2024-05-15 12:30:31.135287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.659 [2024-05-15 12:30:31.135297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.659 [2024-05-15 12:30:31.135306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.659 [2024-05-15 12:30:31.135324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.659 qpair failed and we were unable to recover it. 
00:29:02.659 [2024-05-15 12:30:31.145168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.659 [2024-05-15 12:30:31.145288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.659 [2024-05-15 12:30:31.145307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.659 [2024-05-15 12:30:31.145317] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.659 [2024-05-15 12:30:31.145325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.659 [2024-05-15 12:30:31.145343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.659 qpair failed and we were unable to recover it. 00:29:02.659 [2024-05-15 12:30:31.155203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.659 [2024-05-15 12:30:31.155316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.659 [2024-05-15 12:30:31.155334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.659 [2024-05-15 12:30:31.155344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.659 [2024-05-15 12:30:31.155353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.659 [2024-05-15 12:30:31.155372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.659 qpair failed and we were unable to recover it. 00:29:02.659 [2024-05-15 12:30:31.165205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.659 [2024-05-15 12:30:31.165321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.659 [2024-05-15 12:30:31.165339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.659 [2024-05-15 12:30:31.165349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.659 [2024-05-15 12:30:31.165357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.659 [2024-05-15 12:30:31.165375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.659 qpair failed and we were unable to recover it. 
00:29:02.659 [2024-05-15 12:30:31.175295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.659 [2024-05-15 12:30:31.175447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.659 [2024-05-15 12:30:31.175469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.659 [2024-05-15 12:30:31.175479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.659 [2024-05-15 12:30:31.175487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.659 [2024-05-15 12:30:31.175507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.659 qpair failed and we were unable to recover it. 00:29:02.659 [2024-05-15 12:30:31.185280] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.659 [2024-05-15 12:30:31.185400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.659 [2024-05-15 12:30:31.185418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.659 [2024-05-15 12:30:31.185428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.659 [2024-05-15 12:30:31.185436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.659 [2024-05-15 12:30:31.185455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.659 qpair failed and we were unable to recover it. 00:29:02.919 [2024-05-15 12:30:31.195317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.919 [2024-05-15 12:30:31.195438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.919 [2024-05-15 12:30:31.195457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.919 [2024-05-15 12:30:31.195467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.919 [2024-05-15 12:30:31.195476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.919 [2024-05-15 12:30:31.195494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.919 qpair failed and we were unable to recover it. 
00:29:02.919 [2024-05-15 12:30:31.205343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.919 [2024-05-15 12:30:31.205464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.919 [2024-05-15 12:30:31.205482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.919 [2024-05-15 12:30:31.205492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.919 [2024-05-15 12:30:31.205500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.919 [2024-05-15 12:30:31.205519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.919 qpair failed and we were unable to recover it. 00:29:02.919 [2024-05-15 12:30:31.215346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.919 [2024-05-15 12:30:31.215466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.919 [2024-05-15 12:30:31.215485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.919 [2024-05-15 12:30:31.215495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.919 [2024-05-15 12:30:31.215507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.919 [2024-05-15 12:30:31.215526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.919 qpair failed and we were unable to recover it. 00:29:02.919 [2024-05-15 12:30:31.225372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.919 [2024-05-15 12:30:31.225493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.919 [2024-05-15 12:30:31.225512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.919 [2024-05-15 12:30:31.225521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.919 [2024-05-15 12:30:31.225530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.919 [2024-05-15 12:30:31.225548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.919 qpair failed and we were unable to recover it. 
00:29:02.919 [2024-05-15 12:30:31.235438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.919 [2024-05-15 12:30:31.235554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.919 [2024-05-15 12:30:31.235572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.919 [2024-05-15 12:30:31.235582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.919 [2024-05-15 12:30:31.235591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.919 [2024-05-15 12:30:31.235609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.919 qpair failed and we were unable to recover it. 00:29:02.919 [2024-05-15 12:30:31.245393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.919 [2024-05-15 12:30:31.245509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.919 [2024-05-15 12:30:31.245528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.919 [2024-05-15 12:30:31.245538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.919 [2024-05-15 12:30:31.245546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.919 [2024-05-15 12:30:31.245564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.919 qpair failed and we were unable to recover it. 00:29:02.919 [2024-05-15 12:30:31.255479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.919 [2024-05-15 12:30:31.255597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.919 [2024-05-15 12:30:31.255615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.919 [2024-05-15 12:30:31.255625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.919 [2024-05-15 12:30:31.255633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.919 [2024-05-15 12:30:31.255652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.919 qpair failed and we were unable to recover it. 
00:29:02.919 [2024-05-15 12:30:31.265498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.919 [2024-05-15 12:30:31.265615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.919 [2024-05-15 12:30:31.265633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.919 [2024-05-15 12:30:31.265643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.919 [2024-05-15 12:30:31.265652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.919 [2024-05-15 12:30:31.265670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.919 qpair failed and we were unable to recover it. 00:29:02.919 [2024-05-15 12:30:31.275523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.920 [2024-05-15 12:30:31.275635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.920 [2024-05-15 12:30:31.275654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.920 [2024-05-15 12:30:31.275663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.920 [2024-05-15 12:30:31.275672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.920 [2024-05-15 12:30:31.275690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.920 qpair failed and we were unable to recover it. 00:29:02.920 [2024-05-15 12:30:31.285542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.920 [2024-05-15 12:30:31.285655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.920 [2024-05-15 12:30:31.285674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.920 [2024-05-15 12:30:31.285683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.920 [2024-05-15 12:30:31.285692] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.920 [2024-05-15 12:30:31.285710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.920 qpair failed and we were unable to recover it. 
00:29:02.920 [2024-05-15 12:30:31.295581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.920 [2024-05-15 12:30:31.295708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.920 [2024-05-15 12:30:31.295727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.920 [2024-05-15 12:30:31.295737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.920 [2024-05-15 12:30:31.295745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.920 [2024-05-15 12:30:31.295763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.920 qpair failed and we were unable to recover it. 00:29:02.920 [2024-05-15 12:30:31.305525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.920 [2024-05-15 12:30:31.305634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.920 [2024-05-15 12:30:31.305653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.920 [2024-05-15 12:30:31.305663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.920 [2024-05-15 12:30:31.305676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.920 [2024-05-15 12:30:31.305694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.920 qpair failed and we were unable to recover it. 00:29:02.920 [2024-05-15 12:30:31.315658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.920 [2024-05-15 12:30:31.315798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.920 [2024-05-15 12:30:31.315816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.920 [2024-05-15 12:30:31.315826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.920 [2024-05-15 12:30:31.315834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.920 [2024-05-15 12:30:31.315853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.920 qpair failed and we were unable to recover it. 
00:29:02.920 [2024-05-15 12:30:31.325652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.920 [2024-05-15 12:30:31.325768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.920 [2024-05-15 12:30:31.325787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.920 [2024-05-15 12:30:31.325797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.920 [2024-05-15 12:30:31.325805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.920 [2024-05-15 12:30:31.325823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.920 qpair failed and we were unable to recover it. 00:29:02.920 [2024-05-15 12:30:31.335683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.920 [2024-05-15 12:30:31.335973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.920 [2024-05-15 12:30:31.335992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.920 [2024-05-15 12:30:31.336002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.920 [2024-05-15 12:30:31.336010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.920 [2024-05-15 12:30:31.336029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.920 qpair failed and we were unable to recover it. 00:29:02.920 [2024-05-15 12:30:31.345718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.920 [2024-05-15 12:30:31.346006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.920 [2024-05-15 12:30:31.346024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.920 [2024-05-15 12:30:31.346034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.920 [2024-05-15 12:30:31.346042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.920 [2024-05-15 12:30:31.346060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.920 qpair failed and we were unable to recover it. 
00:29:02.920 [2024-05-15 12:30:31.355740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.920 [2024-05-15 12:30:31.355850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.920 [2024-05-15 12:30:31.355868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.920 [2024-05-15 12:30:31.355878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.920 [2024-05-15 12:30:31.355887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.920 [2024-05-15 12:30:31.355905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.920 qpair failed and we were unable to recover it. 00:29:02.920 [2024-05-15 12:30:31.365771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.920 [2024-05-15 12:30:31.365917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.920 [2024-05-15 12:30:31.365935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.920 [2024-05-15 12:30:31.365945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.920 [2024-05-15 12:30:31.365954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.920 [2024-05-15 12:30:31.365972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.920 qpair failed and we were unable to recover it. 00:29:02.920 [2024-05-15 12:30:31.375799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.920 [2024-05-15 12:30:31.375917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.921 [2024-05-15 12:30:31.375936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.921 [2024-05-15 12:30:31.375946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.921 [2024-05-15 12:30:31.375954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.921 [2024-05-15 12:30:31.375974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.921 qpair failed and we were unable to recover it. 
00:29:02.921 [2024-05-15 12:30:31.385829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.921 [2024-05-15 12:30:31.385939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.921 [2024-05-15 12:30:31.385957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.921 [2024-05-15 12:30:31.385967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.921 [2024-05-15 12:30:31.385975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.921 [2024-05-15 12:30:31.385994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.921 qpair failed and we were unable to recover it. 00:29:02.921 [2024-05-15 12:30:31.395875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.921 [2024-05-15 12:30:31.396003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.921 [2024-05-15 12:30:31.396021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.921 [2024-05-15 12:30:31.396034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.921 [2024-05-15 12:30:31.396043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.921 [2024-05-15 12:30:31.396061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.921 qpair failed and we were unable to recover it. 00:29:02.921 [2024-05-15 12:30:31.405884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.921 [2024-05-15 12:30:31.406003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.921 [2024-05-15 12:30:31.406021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.921 [2024-05-15 12:30:31.406031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.921 [2024-05-15 12:30:31.406040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.921 [2024-05-15 12:30:31.406058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.921 qpair failed and we were unable to recover it. 
00:29:02.921 [2024-05-15 12:30:31.415922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.921 [2024-05-15 12:30:31.416038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.921 [2024-05-15 12:30:31.416056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.921 [2024-05-15 12:30:31.416066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.921 [2024-05-15 12:30:31.416075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.921 [2024-05-15 12:30:31.416093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.921 qpair failed and we were unable to recover it. 00:29:02.921 [2024-05-15 12:30:31.425962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.921 [2024-05-15 12:30:31.426094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.921 [2024-05-15 12:30:31.426112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.921 [2024-05-15 12:30:31.426122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.921 [2024-05-15 12:30:31.426130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.921 [2024-05-15 12:30:31.426148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.921 qpair failed and we were unable to recover it. 00:29:02.921 [2024-05-15 12:30:31.435993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.921 [2024-05-15 12:30:31.436106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.921 [2024-05-15 12:30:31.436124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.921 [2024-05-15 12:30:31.436134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.921 [2024-05-15 12:30:31.436142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.921 [2024-05-15 12:30:31.436161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.921 qpair failed and we were unable to recover it. 
00:29:02.921 [2024-05-15 12:30:31.446019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.921 [2024-05-15 12:30:31.446140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.921 [2024-05-15 12:30:31.446158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.921 [2024-05-15 12:30:31.446168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.921 [2024-05-15 12:30:31.446177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:02.921 [2024-05-15 12:30:31.446201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.921 qpair failed and we were unable to recover it. 00:29:03.179 [2024-05-15 12:30:31.456061] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.179 [2024-05-15 12:30:31.456182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.179 [2024-05-15 12:30:31.456206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.179 [2024-05-15 12:30:31.456216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.179 [2024-05-15 12:30:31.456224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.179 [2024-05-15 12:30:31.456243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.179 qpair failed and we were unable to recover it. 00:29:03.179 [2024-05-15 12:30:31.465997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.179 [2024-05-15 12:30:31.466128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.179 [2024-05-15 12:30:31.466147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.179 [2024-05-15 12:30:31.466157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.179 [2024-05-15 12:30:31.466165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.179 [2024-05-15 12:30:31.466183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.179 qpair failed and we were unable to recover it. 
00:29:03.179 [2024-05-15 12:30:31.476099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.179 [2024-05-15 12:30:31.476219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.179 [2024-05-15 12:30:31.476238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.179 [2024-05-15 12:30:31.476247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.179 [2024-05-15 12:30:31.476256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.179 [2024-05-15 12:30:31.476275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.179 qpair failed and we were unable to recover it. 00:29:03.179 [2024-05-15 12:30:31.486145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.179 [2024-05-15 12:30:31.486264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.179 [2024-05-15 12:30:31.486285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.179 [2024-05-15 12:30:31.486295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.179 [2024-05-15 12:30:31.486304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.179 [2024-05-15 12:30:31.486322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.179 qpair failed and we were unable to recover it. 00:29:03.179 [2024-05-15 12:30:31.496100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.179 [2024-05-15 12:30:31.496219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.179 [2024-05-15 12:30:31.496237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.179 [2024-05-15 12:30:31.496247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.179 [2024-05-15 12:30:31.496256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.179 [2024-05-15 12:30:31.496275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.179 qpair failed and we were unable to recover it. 
00:29:03.179 [2024-05-15 12:30:31.506206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.179 [2024-05-15 12:30:31.506335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.179 [2024-05-15 12:30:31.506354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.179 [2024-05-15 12:30:31.506364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.179 [2024-05-15 12:30:31.506372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.179 [2024-05-15 12:30:31.506392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.179 qpair failed and we were unable to recover it. 00:29:03.179 [2024-05-15 12:30:31.516223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.179 [2024-05-15 12:30:31.516338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.179 [2024-05-15 12:30:31.516356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.516366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.516374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.516393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.526235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.526349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.526367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.526377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.526386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.526407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 
00:29:03.180 [2024-05-15 12:30:31.536274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.536393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.536412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.536422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.536430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.536449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.546314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.546426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.546445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.546455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.546463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.546482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.556350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.556637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.556655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.556664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.556673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.556691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 
00:29:03.180 [2024-05-15 12:30:31.566340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.566460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.566478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.566488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.566496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.566514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.576364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.576477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.576499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.576509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.576518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.576536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.586449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.586600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.586618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.586628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.586637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.586655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 
00:29:03.180 [2024-05-15 12:30:31.596488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.596627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.596646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.596656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.596664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.596682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.606456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.606582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.606602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.606613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.606621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.606640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.616501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.616663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.616681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.616691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.616699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.616722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 
00:29:03.180 [2024-05-15 12:30:31.626557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.626677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.626696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.626706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.626715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.626733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.636538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.636648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.636666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.636676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.636684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.636704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.646568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.646684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.646703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.646714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.646723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.646742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 
00:29:03.180 [2024-05-15 12:30:31.656620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.656739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.656758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.656768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.656777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.656795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.666638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.666755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.666775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.666785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.666793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.666812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.676668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.676781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.676799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.676809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.676818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.676836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 
00:29:03.180 [2024-05-15 12:30:31.686746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.686880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.686898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.686908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.686917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.686936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.696729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.696848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.696867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.696876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.696885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.696904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 00:29:03.180 [2024-05-15 12:30:31.706762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.180 [2024-05-15 12:30:31.706879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.180 [2024-05-15 12:30:31.706898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.180 [2024-05-15 12:30:31.706908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.180 [2024-05-15 12:30:31.706919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.180 [2024-05-15 12:30:31.706938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.180 qpair failed and we were unable to recover it. 
00:29:03.439 [2024-05-15 12:30:31.716815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.439 [2024-05-15 12:30:31.716937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.439 [2024-05-15 12:30:31.716956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.439 [2024-05-15 12:30:31.716966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.439 [2024-05-15 12:30:31.716974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.439 [2024-05-15 12:30:31.716993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.439 qpair failed and we were unable to recover it. 00:29:03.439 [2024-05-15 12:30:31.726805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.439 [2024-05-15 12:30:31.726921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.439 [2024-05-15 12:30:31.726940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.439 [2024-05-15 12:30:31.726949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.439 [2024-05-15 12:30:31.726958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.439 [2024-05-15 12:30:31.726976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.439 qpair failed and we were unable to recover it. 00:29:03.439 [2024-05-15 12:30:31.736869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.439 [2024-05-15 12:30:31.737022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.439 [2024-05-15 12:30:31.737041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.439 [2024-05-15 12:30:31.737051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.439 [2024-05-15 12:30:31.737060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.439 [2024-05-15 12:30:31.737079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.439 qpair failed and we were unable to recover it. 
00:29:03.439 [2024-05-15 12:30:31.746845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.439 [2024-05-15 12:30:31.746960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.439 [2024-05-15 12:30:31.746978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.439 [2024-05-15 12:30:31.746988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.439 [2024-05-15 12:30:31.746996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.439 [2024-05-15 12:30:31.747015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.439 qpair failed and we were unable to recover it. 00:29:03.439 [2024-05-15 12:30:31.756900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.439 [2024-05-15 12:30:31.757011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.439 [2024-05-15 12:30:31.757030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.439 [2024-05-15 12:30:31.757039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.439 [2024-05-15 12:30:31.757048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.439 [2024-05-15 12:30:31.757067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.439 qpair failed and we were unable to recover it. 00:29:03.439 [2024-05-15 12:30:31.766887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.439 [2024-05-15 12:30:31.767003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.439 [2024-05-15 12:30:31.767021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.439 [2024-05-15 12:30:31.767031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.439 [2024-05-15 12:30:31.767040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.439 [2024-05-15 12:30:31.767058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.439 qpair failed and we were unable to recover it. 
00:29:03.439 [2024-05-15 12:30:31.777019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.439 [2024-05-15 12:30:31.777146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.439 [2024-05-15 12:30:31.777165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.439 [2024-05-15 12:30:31.777175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.439 [2024-05-15 12:30:31.777183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.439 [2024-05-15 12:30:31.777209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.439 qpair failed and we were unable to recover it. 00:29:03.439 [2024-05-15 12:30:31.786947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.439 [2024-05-15 12:30:31.787060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.439 [2024-05-15 12:30:31.787079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.439 [2024-05-15 12:30:31.787088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.439 [2024-05-15 12:30:31.787097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.439 [2024-05-15 12:30:31.787115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.439 qpair failed and we were unable to recover it. 00:29:03.439 [2024-05-15 12:30:31.796961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.439 [2024-05-15 12:30:31.797114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.439 [2024-05-15 12:30:31.797132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.439 [2024-05-15 12:30:31.797146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.439 [2024-05-15 12:30:31.797155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.439 [2024-05-15 12:30:31.797173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.439 qpair failed and we were unable to recover it. 
00:29:03.439 [2024-05-15 12:30:31.807046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.439 [2024-05-15 12:30:31.807204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.439 [2024-05-15 12:30:31.807223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.439 [2024-05-15 12:30:31.807233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.439 [2024-05-15 12:30:31.807242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.439 [2024-05-15 12:30:31.807260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.439 qpair failed and we were unable to recover it. 00:29:03.439 [2024-05-15 12:30:31.817109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.439 [2024-05-15 12:30:31.817229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.439 [2024-05-15 12:30:31.817248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.439 [2024-05-15 12:30:31.817258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.439 [2024-05-15 12:30:31.817267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.439 [2024-05-15 12:30:31.817286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.439 qpair failed and we were unable to recover it. 00:29:03.439 [2024-05-15 12:30:31.827096] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.827214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.827233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.827243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.827252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.827271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 
00:29:03.440 [2024-05-15 12:30:31.837065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.837177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.837204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.837214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.837223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.837242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 00:29:03.440 [2024-05-15 12:30:31.847086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.847216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.847235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.847245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.847253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.847272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 00:29:03.440 [2024-05-15 12:30:31.857230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.857352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.857371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.857381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.857389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.857408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 
00:29:03.440 [2024-05-15 12:30:31.867141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.867268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.867287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.867298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.867306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.867325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 00:29:03.440 [2024-05-15 12:30:31.877160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.877290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.877312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.877324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.877332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.877352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 00:29:03.440 [2024-05-15 12:30:31.887210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.887326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.887348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.887359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.887367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.887387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 
00:29:03.440 [2024-05-15 12:30:31.897225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.897346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.897365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.897375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.897384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.897403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 00:29:03.440 [2024-05-15 12:30:31.907301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.907426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.907445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.907455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.907464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.907483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 00:29:03.440 [2024-05-15 12:30:31.917321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.917437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.917456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.917466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.917475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.917495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 
00:29:03.440 [2024-05-15 12:30:31.927386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.927502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.927520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.927530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.927539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.927557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 00:29:03.440 [2024-05-15 12:30:31.937346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.937498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.937516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.937526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.937535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.937553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 00:29:03.440 [2024-05-15 12:30:31.947450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.947564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.947583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.947592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.947601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.947619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 
00:29:03.440 [2024-05-15 12:30:31.957507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.440 [2024-05-15 12:30:31.957620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.440 [2024-05-15 12:30:31.957638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.440 [2024-05-15 12:30:31.957648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.440 [2024-05-15 12:30:31.957656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.440 [2024-05-15 12:30:31.957675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.440 qpair failed and we were unable to recover it. 00:29:03.440 [2024-05-15 12:30:31.967449] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.700 [2024-05-15 12:30:31.967575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.700 [2024-05-15 12:30:31.967593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.700 [2024-05-15 12:30:31.967603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.700 [2024-05-15 12:30:31.967612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.700 [2024-05-15 12:30:31.967630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.700 qpair failed and we were unable to recover it. 00:29:03.700 [2024-05-15 12:30:31.977534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.700 [2024-05-15 12:30:31.977656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.700 [2024-05-15 12:30:31.977679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.700 [2024-05-15 12:30:31.977689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.700 [2024-05-15 12:30:31.977697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.700 [2024-05-15 12:30:31.977716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.700 qpair failed and we were unable to recover it. 
00:29:03.700 [2024-05-15 12:30:31.987488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.700 [2024-05-15 12:30:31.987605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.700 [2024-05-15 12:30:31.987624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.700 [2024-05-15 12:30:31.987633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.700 [2024-05-15 12:30:31.987642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.700 [2024-05-15 12:30:31.987661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.700 qpair failed and we were unable to recover it. 00:29:03.700 [2024-05-15 12:30:31.997616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.700 [2024-05-15 12:30:31.997729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.700 [2024-05-15 12:30:31.997747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.700 [2024-05-15 12:30:31.997758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.700 [2024-05-15 12:30:31.997766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.700 [2024-05-15 12:30:31.997784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.700 qpair failed and we were unable to recover it. 00:29:03.700 [2024-05-15 12:30:32.007601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.700 [2024-05-15 12:30:32.007720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.700 [2024-05-15 12:30:32.007738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.700 [2024-05-15 12:30:32.007748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.700 [2024-05-15 12:30:32.007757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.700 [2024-05-15 12:30:32.007775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.700 qpair failed and we were unable to recover it. 
00:29:03.700 [2024-05-15 12:30:32.017640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.700 [2024-05-15 12:30:32.017755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.700 [2024-05-15 12:30:32.017773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.700 [2024-05-15 12:30:32.017783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.700 [2024-05-15 12:30:32.017792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.700 [2024-05-15 12:30:32.017813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.700 qpair failed and we were unable to recover it. 00:29:03.700 [2024-05-15 12:30:32.027676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.700 [2024-05-15 12:30:32.027807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.700 [2024-05-15 12:30:32.027826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.700 [2024-05-15 12:30:32.027836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.700 [2024-05-15 12:30:32.027844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.700 [2024-05-15 12:30:32.027863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.700 qpair failed and we were unable to recover it. 00:29:03.700 [2024-05-15 12:30:32.037619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.700 [2024-05-15 12:30:32.037730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.700 [2024-05-15 12:30:32.037749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.700 [2024-05-15 12:30:32.037759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.700 [2024-05-15 12:30:32.037768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.700 [2024-05-15 12:30:32.037786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.700 qpair failed and we were unable to recover it. 
00:29:03.700 [2024-05-15 12:30:32.047773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.700 [2024-05-15 12:30:32.047921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.700 [2024-05-15 12:30:32.047939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.700 [2024-05-15 12:30:32.047949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.700 [2024-05-15 12:30:32.047957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.700 [2024-05-15 12:30:32.047975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.700 qpair failed and we were unable to recover it. 00:29:03.700 [2024-05-15 12:30:32.057766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.700 [2024-05-15 12:30:32.057887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.700 [2024-05-15 12:30:32.057906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.700 [2024-05-15 12:30:32.057915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.700 [2024-05-15 12:30:32.057924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.700 [2024-05-15 12:30:32.057942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.700 qpair failed and we were unable to recover it. 00:29:03.700 [2024-05-15 12:30:32.067710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.700 [2024-05-15 12:30:32.067827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.700 [2024-05-15 12:30:32.067849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.700 [2024-05-15 12:30:32.067859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.700 [2024-05-15 12:30:32.067868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.700 [2024-05-15 12:30:32.067886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.700 qpair failed and we were unable to recover it. 
00:29:03.700 [2024-05-15 12:30:32.077994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.700 [2024-05-15 12:30:32.078116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.700 [2024-05-15 12:30:32.078135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.700 [2024-05-15 12:30:32.078145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.078153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.078172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 00:29:03.701 [2024-05-15 12:30:32.087788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.087902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.087920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.087930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.087938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.087957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 00:29:03.701 [2024-05-15 12:30:32.097845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.097959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.097977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.097987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.097995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.098014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 
00:29:03.701 [2024-05-15 12:30:32.107884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.107995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.108014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.108024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.108035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.108055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 00:29:03.701 [2024-05-15 12:30:32.117848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.117960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.117979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.117989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.117997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.118015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 00:29:03.701 [2024-05-15 12:30:32.127900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.128016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.128035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.128044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.128053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.128071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 
00:29:03.701 [2024-05-15 12:30:32.138027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.138157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.138175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.138185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.138201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.138220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 00:29:03.701 [2024-05-15 12:30:32.148005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.148121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.148139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.148149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.148157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.148176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 00:29:03.701 [2024-05-15 12:30:32.158057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.158198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.158217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.158227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.158236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.158254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 
00:29:03.701 [2024-05-15 12:30:32.167998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.168130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.168149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.168158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.168167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.168186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 00:29:03.701 [2024-05-15 12:30:32.178076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.178201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.178220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.178229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.178238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.178257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 00:29:03.701 [2024-05-15 12:30:32.188047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.188173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.188197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.188208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.188216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.188235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 
00:29:03.701 [2024-05-15 12:30:32.198068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.198181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.198206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.198219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.198228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.198246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 00:29:03.701 [2024-05-15 12:30:32.208106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.208230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.208249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.701 [2024-05-15 12:30:32.208258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.701 [2024-05-15 12:30:32.208267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.701 [2024-05-15 12:30:32.208285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.701 qpair failed and we were unable to recover it. 00:29:03.701 [2024-05-15 12:30:32.218213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.701 [2024-05-15 12:30:32.218331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.701 [2024-05-15 12:30:32.218350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.702 [2024-05-15 12:30:32.218360] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.702 [2024-05-15 12:30:32.218368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.702 [2024-05-15 12:30:32.218386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.702 qpair failed and we were unable to recover it. 
00:29:03.961 [2024-05-15 12:30:32.228254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.961 [2024-05-15 12:30:32.228396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.961 [2024-05-15 12:30:32.228415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.961 [2024-05-15 12:30:32.228425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.961 [2024-05-15 12:30:32.228433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.961 [2024-05-15 12:30:32.228451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-05-15 12:30:32.238274] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.961 [2024-05-15 12:30:32.238391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.961 [2024-05-15 12:30:32.238410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.961 [2024-05-15 12:30:32.238420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.961 [2024-05-15 12:30:32.238428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.961 [2024-05-15 12:30:32.238447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-05-15 12:30:32.248304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.961 [2024-05-15 12:30:32.248431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.961 [2024-05-15 12:30:32.248449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.961 [2024-05-15 12:30:32.248459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.961 [2024-05-15 12:30:32.248467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.961 [2024-05-15 12:30:32.248485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.961 qpair failed and we were unable to recover it. 
00:29:03.961 [2024-05-15 12:30:32.258340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.961 [2024-05-15 12:30:32.258452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.961 [2024-05-15 12:30:32.258470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.961 [2024-05-15 12:30:32.258480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.961 [2024-05-15 12:30:32.258489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.961 [2024-05-15 12:30:32.258507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-05-15 12:30:32.268320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.961 [2024-05-15 12:30:32.268602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.961 [2024-05-15 12:30:32.268620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.961 [2024-05-15 12:30:32.268630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.961 [2024-05-15 12:30:32.268638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.961 [2024-05-15 12:30:32.268657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-05-15 12:30:32.278370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.961 [2024-05-15 12:30:32.278481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.961 [2024-05-15 12:30:32.278500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.961 [2024-05-15 12:30:32.278510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.961 [2024-05-15 12:30:32.278518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.961 [2024-05-15 12:30:32.278537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.961 qpair failed and we were unable to recover it. 
00:29:03.961 [2024-05-15 12:30:32.288410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.961 [2024-05-15 12:30:32.288526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.961 [2024-05-15 12:30:32.288545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.961 [2024-05-15 12:30:32.288557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.961 [2024-05-15 12:30:32.288566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.961 [2024-05-15 12:30:32.288584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-05-15 12:30:32.298424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.961 [2024-05-15 12:30:32.298538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.961 [2024-05-15 12:30:32.298556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.961 [2024-05-15 12:30:32.298566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.961 [2024-05-15 12:30:32.298575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.961 [2024-05-15 12:30:32.298593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-05-15 12:30:32.308460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.961 [2024-05-15 12:30:32.308573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.961 [2024-05-15 12:30:32.308592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.961 [2024-05-15 12:30:32.308602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.961 [2024-05-15 12:30:32.308610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.961 [2024-05-15 12:30:32.308628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.961 qpair failed and we were unable to recover it. 
00:29:03.961 [2024-05-15 12:30:32.318499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.961 [2024-05-15 12:30:32.318610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.961 [2024-05-15 12:30:32.318629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.961 [2024-05-15 12:30:32.318638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.961 [2024-05-15 12:30:32.318647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.961 [2024-05-15 12:30:32.318665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-05-15 12:30:32.328496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.961 [2024-05-15 12:30:32.328622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.961 [2024-05-15 12:30:32.328640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.961 [2024-05-15 12:30:32.328651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.961 [2024-05-15 12:30:32.328659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.961 [2024-05-15 12:30:32.328677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.961 qpair failed and we were unable to recover it. 00:29:03.961 [2024-05-15 12:30:32.338552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.961 [2024-05-15 12:30:32.338666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.961 [2024-05-15 12:30:32.338684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.961 [2024-05-15 12:30:32.338694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.961 [2024-05-15 12:30:32.338703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.338721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 
00:29:03.962 [2024-05-15 12:30:32.348575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.348691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.348709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.348720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.348728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.348746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-05-15 12:30:32.358606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.358719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.358737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.358747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.358755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.358774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-05-15 12:30:32.368609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.368720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.368739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.368749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.368757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.368776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 
00:29:03.962 [2024-05-15 12:30:32.378653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.378783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.378806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.378816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.378825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.378843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-05-15 12:30:32.388649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.388770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.388788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.388798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.388807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.388825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-05-15 12:30:32.398722] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.398835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.398853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.398864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.398872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.398890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 
00:29:03.962 [2024-05-15 12:30:32.408764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.408897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.408916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.408925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.408934] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.408952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-05-15 12:30:32.418768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.418884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.418903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.418913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.418921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.418943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-05-15 12:30:32.428783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.428939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.428958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.428968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.428976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.428995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 
00:29:03.962 [2024-05-15 12:30:32.438829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.438940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.438959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.438969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.438977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.438996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-05-15 12:30:32.448831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.449114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.449132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.449142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.449150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.449169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-05-15 12:30:32.458799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.458920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.458938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.458947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.458956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.458974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 
00:29:03.962 [2024-05-15 12:30:32.468912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.469025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.469047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.962 [2024-05-15 12:30:32.469057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.962 [2024-05-15 12:30:32.469065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.962 [2024-05-15 12:30:32.469084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.962 qpair failed and we were unable to recover it. 00:29:03.962 [2024-05-15 12:30:32.478931] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.962 [2024-05-15 12:30:32.479070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.962 [2024-05-15 12:30:32.479089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.963 [2024-05-15 12:30:32.479098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.963 [2024-05-15 12:30:32.479107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:03.963 [2024-05-15 12:30:32.479126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:03.963 qpair failed and we were unable to recover it. 00:29:03.963 [2024-05-15 12:30:32.488957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.963 [2024-05-15 12:30:32.489079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.220 [2024-05-15 12:30:32.489098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.220 [2024-05-15 12:30:32.489108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.220 [2024-05-15 12:30:32.489117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:04.220 [2024-05-15 12:30:32.489135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.220 qpair failed and we were unable to recover it. 
00:29:04.220 [2024-05-15 12:30:32.499038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.221 [2024-05-15 12:30:32.499159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.221 [2024-05-15 12:30:32.499178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.221 [2024-05-15 12:30:32.499188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.221 [2024-05-15 12:30:32.499203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:04.221 [2024-05-15 12:30:32.499221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.221 qpair failed and we were unable to recover it. 00:29:04.221 [2024-05-15 12:30:32.508991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.221 [2024-05-15 12:30:32.509107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.221 [2024-05-15 12:30:32.509126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.221 [2024-05-15 12:30:32.509136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.221 [2024-05-15 12:30:32.509147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:04.221 [2024-05-15 12:30:32.509165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.221 qpair failed and we were unable to recover it. 00:29:04.221 [2024-05-15 12:30:32.518970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.221 [2024-05-15 12:30:32.519089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.221 [2024-05-15 12:30:32.519107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.221 [2024-05-15 12:30:32.519117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.221 [2024-05-15 12:30:32.519125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:04.221 [2024-05-15 12:30:32.519144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.221 qpair failed and we were unable to recover it. 
00:29:04.221 [2024-05-15 12:30:32.529066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.221 [2024-05-15 12:30:32.529181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.221 [2024-05-15 12:30:32.529207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.221 [2024-05-15 12:30:32.529217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.221 [2024-05-15 12:30:32.529226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:04.221 [2024-05-15 12:30:32.529245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.221 qpair failed and we were unable to recover it. 00:29:04.221 [2024-05-15 12:30:32.539083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.221 [2024-05-15 12:30:32.539206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.221 [2024-05-15 12:30:32.539225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.221 [2024-05-15 12:30:32.539235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.221 [2024-05-15 12:30:32.539245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:04.221 [2024-05-15 12:30:32.539265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.221 qpair failed and we were unable to recover it. 00:29:04.221 [2024-05-15 12:30:32.549128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.221 [2024-05-15 12:30:32.549244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.221 [2024-05-15 12:30:32.549262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.221 [2024-05-15 12:30:32.549272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.221 [2024-05-15 12:30:32.549280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba0000b90 00:29:04.221 [2024-05-15 12:30:32.549299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.221 qpair failed and we were unable to recover it. 
00:29:04.221 [2024-05-15 12:30:32.549495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221f140 is same with the state(5) to be set 00:29:04.221 [2024-05-15 12:30:32.559231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.221 [2024-05-15 12:30:32.559381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.221 [2024-05-15 12:30:32.559412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.221 [2024-05-15 12:30:32.559427] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.221 [2024-05-15 12:30:32.559440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b98000b90 00:29:04.221 [2024-05-15 12:30:32.559467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.221 qpair failed and we were unable to recover it. 00:29:04.221 [2024-05-15 12:30:32.569220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.221 [2024-05-15 12:30:32.569369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.221 [2024-05-15 12:30:32.569400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.221 [2024-05-15 12:30:32.569415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.221 [2024-05-15 12:30:32.569428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:29:04.221 [2024-05-15 12:30:32.569457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.221 qpair failed and we were unable to recover it. 00:29:04.221 [2024-05-15 12:30:32.579219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.221 [2024-05-15 12:30:32.579343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.221 [2024-05-15 12:30:32.579363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.221 [2024-05-15 12:30:32.579374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.221 [2024-05-15 12:30:32.579382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5ba8000b90 00:29:04.221 [2024-05-15 12:30:32.579402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.221 qpair failed and we were unable to recover it. 
00:29:04.221 [2024-05-15 12:30:32.589264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.221 [2024-05-15 12:30:32.589576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.221 [2024-05-15 12:30:32.589605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.221 [2024-05-15 12:30:32.589625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.221 [2024-05-15 12:30:32.589643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2211560 00:29:04.221 [2024-05-15 12:30:32.589678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:04.221 qpair failed and we were unable to recover it. 00:29:04.221 [2024-05-15 12:30:32.599260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.221 [2024-05-15 12:30:32.599549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.221 [2024-05-15 12:30:32.599573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.221 [2024-05-15 12:30:32.599587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.221 [2024-05-15 12:30:32.599600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2211560 00:29:04.222 [2024-05-15 12:30:32.599626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:04.222 qpair failed and we were unable to recover it. 00:29:04.222 [2024-05-15 12:30:32.609305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.222 [2024-05-15 12:30:32.609459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.222 [2024-05-15 12:30:32.609489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.222 [2024-05-15 12:30:32.609504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.222 [2024-05-15 12:30:32.609516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b98000b90 00:29:04.222 [2024-05-15 12:30:32.609544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.222 qpair failed and we were unable to recover it. 
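A note on the block of errors above: this is the expected signature of the target-disconnect test rather than a harness fault. The target has already dropped the controller the host created earlier, so each attempt to attach an I/O queue pair is rejected in _nvmf_ctrlr_add_io_qpair ("Unknown controller ID 0x1"); on the host side the Fabrics CONNECT completes with sct 1, sc 130 (0x82, the Fabrics "Connect Invalid Parameters" status), the TCP qpair is torn down, and the driver reports that it could not recover it. The "Failed to flush tqpair ... Bad file descriptor" entry just below appears to be the tail end of the same teardown. A quick way to confirm that a run only contains this pattern is to tally the captured console log, for example (a minimal sketch; the log path is an assumption):

# Tally CONNECT rejections and unrecovered qpairs in a captured run.
LOG=nvmf-tcp-phy-autotest.log                                  # assumed path to the saved console output
grep -c 'Unknown controller ID 0x1' "$LOG"                     # target-side rejections
grep -c 'qpair failed and we were unable to recover' "$LOG"    # host-side give-ups
grep -o 'on qpair id [0-9]*' "$LOG" | sort | uniq -c           # how the failures spread across qpair ids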
00:29:04.222 [2024-05-15 12:30:32.609857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221f140 (9): Bad file descriptor 00:29:04.222 Initializing NVMe Controllers 00:29:04.222 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:04.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:04.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:04.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:04.222 Initialization complete. Launching workers. 00:29:04.222 Starting thread on core 1 00:29:04.222 Starting thread on core 2 00:29:04.222 Starting thread on core 3 00:29:04.222 Starting thread on core 0 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:04.222 00:29:04.222 real 0m11.232s 00:29:04.222 user 0m20.232s 00:29:04.222 sys 0m4.898s 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.222 ************************************ 00:29:04.222 END TEST nvmf_target_disconnect_tc2 00:29:04.222 ************************************ 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:04.222 rmmod nvme_tcp 00:29:04.222 rmmod nvme_fabrics 00:29:04.222 rmmod nvme_keyring 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2300513 ']' 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2300513 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # '[' -z 2300513 ']' 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # kill -0 2300513 00:29:04.222 12:30:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # uname 00:29:04.479 12:30:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:04.479 12:30:32 
nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2300513 00:29:04.479 12:30:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_4 00:29:04.479 12:30:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_4 = sudo ']' 00:29:04.479 12:30:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2300513' 00:29:04.479 killing process with pid 2300513 00:29:04.479 12:30:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # kill 2300513 00:29:04.479 [2024-05-15 12:30:32.802897] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:04.479 12:30:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # wait 2300513 00:29:04.738 12:30:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:04.738 12:30:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:04.738 12:30:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:04.738 12:30:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:04.738 12:30:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:04.738 12:30:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.738 12:30:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:04.738 12:30:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.639 12:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:06.639 00:29:06.639 real 0m20.537s 00:29:06.639 user 0m47.286s 00:29:06.639 sys 0m10.367s 00:29:06.639 12:30:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:06.639 12:30:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:06.639 ************************************ 00:29:06.639 END TEST nvmf_target_disconnect 00:29:06.639 ************************************ 00:29:06.639 12:30:35 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:29:06.639 12:30:35 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:06.639 12:30:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.898 12:30:35 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:06.898 00:29:06.898 real 22m17.307s 00:29:06.898 user 45m58.474s 00:29:06.898 sys 8m6.669s 00:29:06.898 12:30:35 nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:06.898 12:30:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.898 ************************************ 00:29:06.898 END TEST nvmf_tcp 00:29:06.898 ************************************ 00:29:06.898 12:30:35 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:29:06.898 12:30:35 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:06.898 12:30:35 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:06.898 12:30:35 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:06.898 12:30:35 -- common/autotest_common.sh@10 -- # set +x 00:29:06.898 
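Between the two test groups the harness runs its standard cleanup (nvmftestfini): the host-side nvme_tcp, nvme_fabrics and nvme_keyring modules are unloaded, the nvmf application from the disconnect test (pid 2300513 in this run) is killed and waited on, the SPDK network namespace is removed, the IPv4 addresses on the test interface are flushed, and the cumulative timings for nvmf_target_disconnect and the whole nvmf_tcp group are printed before the next suite starts. A rough manual equivalent, assuming the interface and namespace names that appear in this log, would be:

# Hand-run sketch of the cleanup above; cvl_0_1 and cvl_0_0_ns_spdk are taken
# from this log, and the pid is specific to this run.
sudo kill 2300513 || true
sudo modprobe -r nvme_tcp nvme_fabrics nvme_keyring
sudo ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
sudo ip -4 addr flush cvl_0_1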
************************************ 00:29:06.898 START TEST spdkcli_nvmf_tcp 00:29:06.898 ************************************ 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:06.898 * Looking for test storage... 00:29:06.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2302247 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2302247 00:29:06.898 12:30:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # '[' -z 2302247 ']' 00:29:06.899 12:30:35 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:06.899 12:30:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.899 12:30:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:06.899 12:30:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.899 12:30:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:06.899 12:30:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.158 [2024-05-15 12:30:35.461865] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:29:07.158 [2024-05-15 12:30:35.461918] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302247 ] 00:29:07.158 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.158 [2024-05-15 12:30:35.531382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:07.158 [2024-05-15 12:30:35.606275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.158 [2024-05-15 12:30:35.606279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.730 12:30:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:07.730 12:30:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@861 -- # return 0 00:29:07.730 12:30:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:07.989 12:30:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:07.989 12:30:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.989 12:30:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:07.989 12:30:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:07.989 12:30:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:07.989 12:30:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:07.989 12:30:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.989 12:30:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:07.989 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:07.989 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:07.989 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:07.989 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:07.989 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:07.989 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:07.990 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:07.990 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:07.990 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:07.990 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:07.990 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:07.990 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:07.990 ' 00:29:10.521 [2024-05-15 12:30:38.677933] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.457 [2024-05-15 12:30:39.853443] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:11.457 [2024-05-15 12:30:39.853892] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:13.987 [2024-05-15 12:30:42.016517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:15.360 [2024-05-15 12:30:43.874217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:17.261 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:17.261 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:17.261 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:17.261 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:17.261 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:17.261 
Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:17.261 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:17.261 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:17.261 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:17.261 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:17.261 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:17.261 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:17.261 12:30:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:17.261 12:30:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:17.261 12:30:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:17.261 12:30:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter 
spdkcli_check_match 00:29:17.261 12:30:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:17.261 12:30:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:17.261 12:30:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:17.261 12:30:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:17.520 12:30:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:17.520 12:30:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:17.520 12:30:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:17.520 12:30:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:17.520 12:30:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:17.520 12:30:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:17.520 12:30:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:17.520 12:30:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:17.520 12:30:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:17.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:17.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:17.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:17.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:17.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:17.520 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:17.520 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:17.520 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:17.520 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:17.520 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:17.520 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:17.520 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:17.520 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:17.520 ' 00:29:22.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:22.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:22.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:22.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:22.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:22.879 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:22.879 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:22.879 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:22.879 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:22.879 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:22.879 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:22.879 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:22.879 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:22.879 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2302247 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 2302247 ']' 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 2302247 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # uname 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2302247 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2302247' 00:29:22.879 killing process with pid 2302247 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # kill 2302247 00:29:22.879 [2024-05-15 12:30:50.993977] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:22.879 12:30:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # wait 2302247 00:29:22.879 12:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:22.879 12:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:22.879 12:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2302247 ']' 00:29:22.879 12:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2302247 00:29:22.879 12:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 2302247 ']' 00:29:22.879 12:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 2302247 00:29:22.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2302247) - No such process 00:29:22.879 12:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # echo 'Process with pid 2302247 is not found' 00:29:22.879 Process with pid 2302247 is not found 00:29:22.879 12:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:22.879 12:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:22.879 12:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:22.879 00:29:22.879 real 0m15.930s 00:29:22.879 user 0m32.837s 00:29:22.879 sys 0m0.879s 00:29:22.879 12:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:22.879 12:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:22.879 ************************************ 00:29:22.879 END TEST spdkcli_nvmf_tcp 00:29:22.879 ************************************ 00:29:22.879 12:30:51 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:22.879 12:30:51 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:22.879 12:30:51 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:22.879 12:30:51 -- common/autotest_common.sh@10 -- # set +x 00:29:22.879 ************************************ 00:29:22.879 START TEST nvmf_identify_passthru 00:29:22.879 ************************************ 00:29:22.880 12:30:51 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:22.880 * Looking for test storage... 00:29:22.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:22.880 12:30:51 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.880 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.138 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.138 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.138 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.138 12:30:51 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.138 12:30:51 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.138 12:30:51 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.138 12:30:51 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.138 12:30:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.138 12:30:51 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.138 12:30:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:23.138 12:30:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.138 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:23.138 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.138 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.138 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.138 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.138 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.138 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.139 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.139 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.139 12:30:51 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.139 12:30:51 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.139 12:30:51 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.139 12:30:51 nvmf_identify_passthru -- scripts/common.sh@517 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.139 12:30:51 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.139 12:30:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.139 12:30:51 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.139 12:30:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:23.139 12:30:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.139 12:30:51 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:23.139 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:23.139 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.139 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:23.139 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:23.139 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:23.139 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.139 12:30:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:23.139 12:30:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.139 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:23.139 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:23.139 12:30:51 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:23.139 12:30:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:29.700 12:30:57 
nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:29.700 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:29.700 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:29.700 Found net devices under 0000:af:00.0: cvl_0_0 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:29.700 Found net devices under 0000:af:00.1: cvl_0_1 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:29.700 12:30:57 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.700 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.700 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.700 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:29.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:29:29.700 00:29:29.700 --- 10.0.0.2 ping statistics --- 00:29:29.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.700 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:29:29.700 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:29.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:29:29.700 00:29:29.700 --- 10.0.0.1 ping statistics --- 00:29:29.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.701 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:29:29.701 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.701 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:29.701 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:29.701 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.701 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:29.701 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:29.701 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.701 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:29.701 12:30:58 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:29.701 12:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:29.701 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:29.701 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:29.701 12:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:29.701 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=() 00:29:29.701 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # local bdfs 00:29:29.701 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=($(get_nvme_bdfs)) 00:29:29.701 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # get_nvme_bdfs 00:29:29.701 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=() 00:29:29.701 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # local bdfs 00:29:29.701 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:29.701 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:29.701 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:29:29.959 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:29:29.959 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:d8:00.0 00:29:29.959 12:30:58 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # echo 0000:d8:00.0 00:29:29.959 12:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:29:29.959 12:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:29:29.959 12:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:29:29.959 12:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:29.959 12:30:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:29.959 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.224 
12:31:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:29:35.224 12:31:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:29:35.224 12:31:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:35.224 12:31:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:35.224 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.405 12:31:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:39.405 12:31:07 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:39.405 12:31:07 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:39.405 12:31:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:39.405 12:31:07 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:39.405 12:31:07 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:39.405 12:31:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:39.405 12:31:07 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:39.405 12:31:07 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2309738 00:29:39.405 12:31:07 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:39.405 12:31:07 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2309738 00:29:39.405 12:31:07 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # '[' -z 2309738 ']' 00:29:39.405 12:31:07 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.405 12:31:07 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:39.405 12:31:07 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.405 12:31:07 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:39.405 12:31:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:39.405 [2024-05-15 12:31:07.896137] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:29:39.405 [2024-05-15 12:31:07.896185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:39.405 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.664 [2024-05-15 12:31:07.969738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:39.664 [2024-05-15 12:31:08.045991] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:39.664 [2024-05-15 12:31:08.046028] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:39.664 [2024-05-15 12:31:08.046042] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:39.664 [2024-05-15 12:31:08.046052] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:39.664 [2024-05-15 12:31:08.046062] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:39.664 [2024-05-15 12:31:08.046120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.664 [2024-05-15 12:31:08.046222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:39.664 [2024-05-15 12:31:08.046256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:39.664 [2024-05-15 12:31:08.046259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.231 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:40.231 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@861 -- # return 0 00:29:40.231 12:31:08 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:40.231 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.231 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:40.231 INFO: Log level set to 20 00:29:40.231 INFO: Requests: 00:29:40.231 { 00:29:40.231 "jsonrpc": "2.0", 00:29:40.231 "method": "nvmf_set_config", 00:29:40.231 "id": 1, 00:29:40.231 "params": { 00:29:40.231 "admin_cmd_passthru": { 00:29:40.231 "identify_ctrlr": true 00:29:40.231 } 00:29:40.231 } 00:29:40.231 } 00:29:40.231 00:29:40.231 INFO: response: 00:29:40.231 { 00:29:40.231 "jsonrpc": "2.0", 00:29:40.231 "id": 1, 00:29:40.231 "result": true 00:29:40.231 } 00:29:40.231 00:29:40.231 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.231 12:31:08 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:40.231 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.231 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:40.231 INFO: Setting log level to 20 00:29:40.231 INFO: Setting log level to 20 00:29:40.231 INFO: Log level set to 20 00:29:40.231 INFO: Log level set to 20 00:29:40.231 INFO: Requests: 00:29:40.231 { 00:29:40.231 "jsonrpc": "2.0", 00:29:40.231 "method": "framework_start_init", 00:29:40.231 "id": 1 00:29:40.231 } 00:29:40.231 00:29:40.231 INFO: Requests: 00:29:40.231 { 00:29:40.231 "jsonrpc": "2.0", 00:29:40.231 "method": "framework_start_init", 00:29:40.231 "id": 1 00:29:40.231 } 00:29:40.231 00:29:40.490 [2024-05-15 12:31:08.815669] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:40.490 INFO: response: 00:29:40.490 { 00:29:40.490 "jsonrpc": "2.0", 00:29:40.490 "id": 1, 00:29:40.490 "result": true 00:29:40.490 } 00:29:40.490 00:29:40.490 INFO: response: 00:29:40.490 { 00:29:40.490 "jsonrpc": "2.0", 00:29:40.490 "id": 1, 00:29:40.490 "result": true 00:29:40.490 } 00:29:40.490 00:29:40.490 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.490 12:31:08 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:40.490 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.490 12:31:08 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:29:40.490 INFO: Setting log level to 40 00:29:40.490 INFO: Setting log level to 40 00:29:40.490 INFO: Setting log level to 40 00:29:40.490 [2024-05-15 12:31:08.828988] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.490 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.490 12:31:08 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:40.490 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:40.490 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:40.490 12:31:08 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:29:40.490 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.490 12:31:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:43.776 Nvme0n1 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.776 12:31:11 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.776 12:31:11 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.776 12:31:11 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:43.776 [2024-05-15 12:31:11.756162] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:43.776 [2024-05-15 12:31:11.756475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.776 12:31:11 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:43.776 [ 00:29:43.776 { 00:29:43.776 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:43.776 "subtype": "Discovery", 00:29:43.776 "listen_addresses": [], 00:29:43.776 "allow_any_host": true, 00:29:43.776 "hosts": [] 00:29:43.776 }, 00:29:43.776 { 00:29:43.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:43.776 "subtype": "NVMe", 00:29:43.776 "listen_addresses": [ 00:29:43.776 { 00:29:43.776 "trtype": "TCP", 
00:29:43.776 "adrfam": "IPv4", 00:29:43.776 "traddr": "10.0.0.2", 00:29:43.776 "trsvcid": "4420" 00:29:43.776 } 00:29:43.776 ], 00:29:43.776 "allow_any_host": true, 00:29:43.776 "hosts": [], 00:29:43.776 "serial_number": "SPDK00000000000001", 00:29:43.776 "model_number": "SPDK bdev Controller", 00:29:43.776 "max_namespaces": 1, 00:29:43.776 "min_cntlid": 1, 00:29:43.776 "max_cntlid": 65519, 00:29:43.776 "namespaces": [ 00:29:43.776 { 00:29:43.776 "nsid": 1, 00:29:43.776 "bdev_name": "Nvme0n1", 00:29:43.776 "name": "Nvme0n1", 00:29:43.776 "nguid": "265AE52E4D884FB09142FD010DBE1A5C", 00:29:43.776 "uuid": "265ae52e-4d88-4fb0-9142-fd010dbe1a5c" 00:29:43.776 } 00:29:43.776 ] 00:29:43.776 } 00:29:43.776 ] 00:29:43.776 12:31:11 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.776 12:31:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:43.776 12:31:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:43.776 12:31:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:43.776 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.776 12:31:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:29:43.776 12:31:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:43.776 12:31:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:43.776 12:31:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:43.776 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.776 12:31:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:29:43.776 12:31:12 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:29:43.776 12:31:12 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:29:43.776 12:31:12 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:43.776 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.776 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:43.776 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.776 12:31:12 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:43.776 12:31:12 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:43.776 12:31:12 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:43.776 12:31:12 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:43.776 12:31:12 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:43.776 12:31:12 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:43.776 12:31:12 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:43.776 12:31:12 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:43.776 rmmod nvme_tcp 00:29:43.776 rmmod nvme_fabrics 00:29:43.776 rmmod 
nvme_keyring 00:29:43.776 12:31:12 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:43.776 12:31:12 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:43.776 12:31:12 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:43.776 12:31:12 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2309738 ']' 00:29:43.776 12:31:12 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2309738 00:29:43.776 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' -z 2309738 ']' 00:29:43.776 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # kill -0 2309738 00:29:43.776 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # uname 00:29:43.776 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:43.776 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2309738 00:29:43.777 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:43.777 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:43.777 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2309738' 00:29:43.777 killing process with pid 2309738 00:29:43.777 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # kill 2309738 00:29:43.777 [2024-05-15 12:31:12.261789] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:43.777 12:31:12 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # wait 2309738 00:29:46.320 12:31:14 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:46.320 12:31:14 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:46.320 12:31:14 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:46.320 12:31:14 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:46.320 12:31:14 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:46.320 12:31:14 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.320 12:31:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:46.320 12:31:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.229 12:31:16 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:48.229 00:29:48.229 real 0m25.136s 00:29:48.229 user 0m33.692s 00:29:48.229 sys 0m6.414s 00:29:48.229 12:31:16 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:48.229 12:31:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:48.229 ************************************ 00:29:48.229 END TEST nvmf_identify_passthru 00:29:48.229 ************************************ 00:29:48.229 12:31:16 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:48.229 12:31:16 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:48.229 12:31:16 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:48.229 12:31:16 -- common/autotest_common.sh@10 -- # set +x 00:29:48.229 ************************************ 00:29:48.229 START TEST nvmf_dif 
00:29:48.229 ************************************ 00:29:48.229 12:31:16 nvmf_dif -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:48.229 * Looking for test storage... 00:29:48.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:48.229 12:31:16 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.229 12:31:16 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.229 12:31:16 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.229 12:31:16 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.229 12:31:16 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.229 12:31:16 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.229 12:31:16 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.229 12:31:16 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:48.229 12:31:16 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:48.229 12:31:16 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:48.229 12:31:16 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:48.229 12:31:16 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:48.229 12:31:16 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:48.229 12:31:16 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.229 12:31:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:48.229 12:31:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:48.229 12:31:16 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:48.229 12:31:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:54.792 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:54.792 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:54.792 12:31:22 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:54.792 Found net devices under 0000:af:00.0: cvl_0_0 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:54.792 Found net devices under 0000:af:00.1: cvl_0_1 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.792 12:31:22 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.792 12:31:23 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.792 12:31:23 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.792 12:31:23 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:54.792 12:31:23 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.792 12:31:23 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.792 12:31:23 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.792 12:31:23 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:54.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:29:54.792 00:29:54.792 --- 10.0.0.2 ping statistics --- 00:29:54.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.792 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:29:54.792 12:31:23 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:54.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:29:54.792 00:29:54.792 --- 10.0.0.1 ping statistics --- 00:29:54.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.792 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:29:54.792 12:31:23 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.792 12:31:23 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:54.792 12:31:23 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:54.792 12:31:23 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:58.080 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:58.080 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:58.080 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:58.080 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:58.080 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:58.081 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:58.081 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:58.081 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:58.081 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:58.081 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:58.081 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:58.081 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:58.081 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:58.081 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:58.081 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:58.081 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:58.081 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:58.081 12:31:26 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.081 12:31:26 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:58.081 12:31:26 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:58.081 12:31:26 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.081 12:31:26 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:58.081 12:31:26 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:58.081 12:31:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:58.081 12:31:26 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:29:58.081 12:31:26 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:58.081 12:31:26 nvmf_dif -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:58.081 12:31:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:58.081 12:31:26 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:58.081 12:31:26 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2315615 00:29:58.081 12:31:26 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2315615 00:29:58.081 12:31:26 nvmf_dif -- common/autotest_common.sh@828 -- # '[' -z 2315615 ']' 00:29:58.081 12:31:26 nvmf_dif -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.081 12:31:26 nvmf_dif -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:58.081 12:31:26 nvmf_dif -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.081 12:31:26 nvmf_dif -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:58.081 12:31:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:58.081 [2024-05-15 12:31:26.459847] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:29:58.081 [2024-05-15 12:31:26.459891] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.081 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.081 [2024-05-15 12:31:26.533495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.081 [2024-05-15 12:31:26.608229] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.081 [2024-05-15 12:31:26.608263] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.081 [2024-05-15 12:31:26.608277] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.081 [2024-05-15 12:31:26.608287] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.081 [2024-05-15 12:31:26.608299] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
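Before any NVMe/TCP traffic can flow, nvmftestinit above splits the two E810 ports into a target side and an initiator side: one port (cvl_0_0 in this run) is moved into a fresh network namespace that will host nvmf_tgt, the other (cvl_0_1) stays in the root namespace as the initiator, the 10.0.0.0/24 addresses and a firewall rule for port 4420 are applied, and both directions are verified with ping before the target is launched inside the namespace. A condensed sketch of that sequence, assuming the ports come up with the cvl_0_0/cvl_0_1 names seen here and that the SPDK tree lives at the path shown in this log; the wait loop is only an approximation of what waitforlisten does:

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                      # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address in the root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec $NS ping -c 1 10.0.0.1               # target ns -> root ns

    # launch the target inside the namespace and wait for its RPC socket
    ip netns exec $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done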
00:29:58.081 [2024-05-15 12:31:26.608332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.017 12:31:27 nvmf_dif -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:59.017 12:31:27 nvmf_dif -- common/autotest_common.sh@861 -- # return 0 00:29:59.017 12:31:27 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:59.017 12:31:27 nvmf_dif -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:59.017 12:31:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.017 12:31:27 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.017 12:31:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:59.017 12:31:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:59.017 12:31:27 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:59.017 12:31:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.017 [2024-05-15 12:31:27.315186] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.017 12:31:27 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:59.017 12:31:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:59.017 12:31:27 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:59.017 12:31:27 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:59.017 12:31:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:59.017 ************************************ 00:29:59.017 START TEST fio_dif_1_default 00:29:59.017 ************************************ 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # fio_dif_1 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.017 bdev_null0 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:59.017 [2024-05-15 12:31:27.399368] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:59.017 [2024-05-15 12:31:27.399579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:59.017 { 00:29:59.017 "params": { 00:29:59.017 "name": "Nvme$subsystem", 00:29:59.017 "trtype": "$TEST_TRANSPORT", 00:29:59.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.017 "adrfam": "ipv4", 00:29:59.017 "trsvcid": "$NVMF_PORT", 00:29:59.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.017 "hdgst": ${hdgst:-false}, 00:29:59.017 "ddgst": ${ddgst:-false} 00:29:59.017 }, 00:29:59.017 "method": "bdev_nvme_attach_controller" 00:29:59.017 } 00:29:59.017 EOF 00:29:59.017 )") 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:29:59.017 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local sanitizers 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # shift 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local asan_lib= 00:29:59.018 12:31:27 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libasan 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:59.018 "params": { 00:29:59.018 "name": "Nvme0", 00:29:59.018 "trtype": "tcp", 00:29:59.018 "traddr": "10.0.0.2", 00:29:59.018 "adrfam": "ipv4", 00:29:59.018 "trsvcid": "4420", 00:29:59.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:59.018 "hdgst": false, 00:29:59.018 "ddgst": false 00:29:59.018 }, 00:29:59.018 "method": "bdev_nvme_attach_controller" 00:29:59.018 }' 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:59.018 12:31:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.276 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:59.276 fio-3.35 00:29:59.276 Starting 1 thread 00:29:59.535 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.740 00:30:11.740 filename0: (groupid=0, jobs=1): err= 0: pid=2316111: Wed May 15 12:31:38 2024 00:30:11.740 read: IOPS=181, BW=725KiB/s (742kB/s)(7264KiB/10025msec) 00:30:11.740 slat (nsec): min=5394, max=25453, avg=5949.63, stdev=1131.36 00:30:11.740 clat (usec): min=1508, max=43631, avg=22064.44, stdev=20411.61 00:30:11.740 lat (usec): min=1513, max=43656, avg=22070.39, stdev=20411.64 00:30:11.740 clat percentiles (usec): 00:30:11.740 | 1.00th=[ 1532], 5.00th=[ 1532], 10.00th=[ 1532], 20.00th=[ 1549], 00:30:11.740 | 30.00th=[ 1549], 40.00th=[ 1582], 50.00th=[41157], 60.00th=[42206], 00:30:11.740 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:30:11.740 | 99.00th=[42730], 99.50th=[43254], 
99.90th=[43779], 99.95th=[43779], 00:30:11.740 | 99.99th=[43779] 00:30:11.740 bw ( KiB/s): min= 672, max= 768, per=99.92%, avg=724.80, stdev=31.62, samples=20 00:30:11.740 iops : min= 168, max= 192, avg=181.20, stdev= 7.90, samples=20 00:30:11.740 lat (msec) : 2=49.78%, 50=50.22% 00:30:11.740 cpu : usr=85.06%, sys=14.69%, ctx=16, majf=0, minf=216 00:30:11.740 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.740 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.740 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:11.740 00:30:11.741 Run status group 0 (all jobs): 00:30:11.741 READ: bw=725KiB/s (742kB/s), 725KiB/s-725KiB/s (742kB/s-742kB/s), io=7264KiB (7438kB), run=10025-10025msec 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.741 00:30:11.741 real 0m11.131s 00:30:11.741 user 0m17.313s 00:30:11.741 sys 0m1.802s 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:11.741 ************************************ 00:30:11.741 END TEST fio_dif_1_default 00:30:11.741 ************************************ 00:30:11.741 12:31:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:11.741 12:31:38 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:11.741 12:31:38 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:11.741 12:31:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:11.741 ************************************ 00:30:11.741 START TEST fio_dif_1_multi_subsystems 00:30:11.741 ************************************ 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # fio_dif_1_multi_subsystems 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@28 -- # local sub 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.741 bdev_null0 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.741 [2024-05-15 12:31:38.622838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.741 bdev_null1 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.741 12:31:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:11.741 { 00:30:11.741 "params": { 00:30:11.741 "name": "Nvme$subsystem", 00:30:11.741 "trtype": "$TEST_TRANSPORT", 00:30:11.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.741 "adrfam": "ipv4", 00:30:11.741 "trsvcid": "$NVMF_PORT", 00:30:11.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.741 "hdgst": ${hdgst:-false}, 00:30:11.741 "ddgst": ${ddgst:-false} 00:30:11.741 }, 00:30:11.741 "method": "bdev_nvme_attach_controller" 00:30:11.741 } 00:30:11.741 EOF 00:30:11.741 )") 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1336 -- # local sanitizers 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # shift 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local asan_lib= 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:11.741 { 00:30:11.741 "params": { 00:30:11.741 "name": "Nvme$subsystem", 00:30:11.741 "trtype": "$TEST_TRANSPORT", 00:30:11.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.741 "adrfam": "ipv4", 00:30:11.741 "trsvcid": "$NVMF_PORT", 00:30:11.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.741 "hdgst": ${hdgst:-false}, 00:30:11.741 "ddgst": ${ddgst:-false} 00:30:11.741 }, 00:30:11.741 "method": "bdev_nvme_attach_controller" 00:30:11.741 } 00:30:11.741 EOF 00:30:11.741 )") 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libasan 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
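The fio_dif_1_multi_subsystems case above provisions two independent NVMe/TCP subsystems, each backed by its own DIF type 1 null bdev, so that a single fio run can drive two controllers at once. The rpc_cmd calls traced above amount to the following sequence, written here as direct scripts/rpc.py calls for readability (the TCP transport itself was created once, earlier in the run, with --dif-insert-or-strip):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip   # done once per target

    for i in 0 1; do
        ./scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done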
00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:11.741 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:11.741 "params": { 00:30:11.741 "name": "Nvme0", 00:30:11.742 "trtype": "tcp", 00:30:11.742 "traddr": "10.0.0.2", 00:30:11.742 "adrfam": "ipv4", 00:30:11.742 "trsvcid": "4420", 00:30:11.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:11.742 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:11.742 "hdgst": false, 00:30:11.742 "ddgst": false 00:30:11.742 }, 00:30:11.742 "method": "bdev_nvme_attach_controller" 00:30:11.742 },{ 00:30:11.742 "params": { 00:30:11.742 "name": "Nvme1", 00:30:11.742 "trtype": "tcp", 00:30:11.742 "traddr": "10.0.0.2", 00:30:11.742 "adrfam": "ipv4", 00:30:11.742 "trsvcid": "4420", 00:30:11.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:11.742 "hdgst": false, 00:30:11.742 "ddgst": false 00:30:11.742 }, 00:30:11.742 "method": "bdev_nvme_attach_controller" 00:30:11.742 }' 00:30:11.742 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:11.742 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:11.742 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.742 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:11.742 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:30:11.742 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:11.742 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:11.742 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:11.742 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:11.742 12:31:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.742 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:11.742 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:11.742 fio-3.35 00:30:11.742 Starting 2 threads 00:30:11.742 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.764 00:30:21.764 filename0: (groupid=0, jobs=1): err= 0: pid=2318120: Wed May 15 12:31:49 2024 00:30:21.764 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10030msec) 00:30:21.764 slat (nsec): min=3889, max=14829, avg=7304.78, stdev=2218.59 00:30:21.764 clat (usec): min=40896, max=44341, avg=41422.41, stdev=526.15 00:30:21.764 lat (usec): min=40902, max=44354, avg=41429.72, stdev=526.16 00:30:21.764 clat percentiles (usec): 00:30:21.764 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:21.764 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:30:21.764 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:21.764 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:30:21.764 | 99.99th=[44303] 
00:30:21.764 bw ( KiB/s): min= 352, max= 416, per=50.28%, avg=385.60, stdev=12.61, samples=20 00:30:21.764 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:30:21.764 lat (msec) : 50=100.00% 00:30:21.764 cpu : usr=93.04%, sys=6.71%, ctx=13, majf=0, minf=79 00:30:21.764 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.764 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.764 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:21.764 filename1: (groupid=0, jobs=1): err= 0: pid=2318121: Wed May 15 12:31:49 2024 00:30:21.764 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10005msec) 00:30:21.764 slat (nsec): min=5786, max=31138, avg=7385.78, stdev=2523.21 00:30:21.764 clat (usec): min=41772, max=44089, avg=42012.28, stdev=212.46 00:30:21.764 lat (usec): min=41778, max=44115, avg=42019.67, stdev=212.78 00:30:21.764 clat percentiles (usec): 00:30:21.764 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:30:21.764 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:21.764 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:21.764 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:30:21.764 | 99.99th=[44303] 00:30:21.764 bw ( KiB/s): min= 352, max= 384, per=49.50%, avg=379.20, stdev=11.72, samples=20 00:30:21.764 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:30:21.764 lat (msec) : 50=100.00% 00:30:21.764 cpu : usr=93.37%, sys=6.37%, ctx=9, majf=0, minf=161 00:30:21.764 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.764 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.764 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:21.764 00:30:21.764 Run status group 0 (all jobs): 00:30:21.764 READ: bw=766KiB/s (784kB/s), 381KiB/s-386KiB/s (390kB/s-395kB/s), io=7680KiB (7864kB), run=10005-10030msec 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.764 00:30:21.764 real 0m11.245s 00:30:21.764 user 0m27.673s 00:30:21.764 sys 0m1.690s 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:21.764 12:31:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.764 ************************************ 00:30:21.764 END TEST fio_dif_1_multi_subsystems 00:30:21.764 ************************************ 00:30:21.764 12:31:49 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:21.764 12:31:49 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:21.764 12:31:49 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:21.764 12:31:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:21.764 ************************************ 00:30:21.765 START TEST fio_dif_rand_params 00:30:21.765 ************************************ 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # fio_dif_rand_params 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:21.765 12:31:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:21.765 bdev_null0 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:21.765 [2024-05-15 12:31:49.953111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:21.765 { 00:30:21.765 "params": { 00:30:21.765 "name": "Nvme$subsystem", 00:30:21.765 "trtype": "$TEST_TRANSPORT", 00:30:21.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:21.765 "adrfam": "ipv4", 00:30:21.765 "trsvcid": "$NVMF_PORT", 00:30:21.765 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:21.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:21.765 "hdgst": ${hdgst:-false}, 00:30:21.765 "ddgst": ${ddgst:-false} 00:30:21.765 }, 00:30:21.765 "method": "bdev_nvme_attach_controller" 00:30:21.765 } 00:30:21.765 EOF 00:30:21.765 )") 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:21.765 12:31:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:21.765 "params": { 00:30:21.765 "name": "Nvme0", 00:30:21.765 "trtype": "tcp", 00:30:21.765 "traddr": "10.0.0.2", 00:30:21.765 "adrfam": "ipv4", 00:30:21.765 "trsvcid": "4420", 00:30:21.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:21.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:21.765 "hdgst": false, 00:30:21.765 "ddgst": false 00:30:21.765 }, 00:30:21.765 "method": "bdev_nvme_attach_controller" 00:30:21.765 }' 00:30:21.765 12:31:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:21.765 12:31:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:21.765 12:31:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.765 12:31:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:21.765 12:31:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:30:21.765 12:31:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:21.765 12:31:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:21.765 12:31:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:21.765 12:31:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:21.765 12:31:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.026 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:22.026 ... 
00:30:22.026 fio-3.35 00:30:22.026 Starting 3 threads 00:30:22.026 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.575 00:30:28.575 filename0: (groupid=0, jobs=1): err= 0: pid=2320142: Wed May 15 12:31:55 2024 00:30:28.575 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(162MiB/5001msec) 00:30:28.575 slat (nsec): min=6070, max=42430, avg=14046.25, stdev=7732.87 00:30:28.575 clat (usec): min=4505, max=96613, avg=11537.84, stdev=12794.78 00:30:28.575 lat (usec): min=4515, max=96640, avg=11551.89, stdev=12795.37 00:30:28.575 clat percentiles (usec): 00:30:28.575 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5407], 20.00th=[ 5997], 00:30:28.575 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7504], 60.00th=[ 8225], 00:30:28.575 | 70.00th=[ 9241], 80.00th=[10159], 90.00th=[12911], 95.00th=[50070], 00:30:28.575 | 99.00th=[55313], 99.50th=[55837], 99.90th=[92799], 99.95th=[96994], 00:30:28.575 | 99.99th=[96994] 00:30:28.575 bw ( KiB/s): min=24064, max=41472, per=33.83%, avg=33080.89, stdev=5666.48, samples=9 00:30:28.575 iops : min= 188, max= 324, avg=258.44, stdev=44.27, samples=9 00:30:28.575 lat (msec) : 10=79.20%, 20=11.79%, 50=4.16%, 100=4.85% 00:30:28.575 cpu : usr=94.68%, sys=4.88%, ctx=29, majf=0, minf=56 00:30:28.575 IO depths : 1=2.5%, 2=97.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.575 issued rwts: total=1298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.575 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:28.575 filename0: (groupid=0, jobs=1): err= 0: pid=2320143: Wed May 15 12:31:55 2024 00:30:28.575 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(158MiB/5043msec) 00:30:28.575 slat (nsec): min=5926, max=58511, avg=11105.99, stdev=5138.82 00:30:28.575 clat (usec): min=4487, max=95859, avg=11985.31, stdev=13711.56 00:30:28.575 lat (usec): min=4505, max=95872, avg=11996.42, stdev=13712.04 00:30:28.575 clat percentiles (usec): 00:30:28.575 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5538], 20.00th=[ 6259], 00:30:28.575 | 30.00th=[ 6587], 40.00th=[ 7046], 50.00th=[ 7439], 60.00th=[ 8160], 00:30:28.575 | 70.00th=[ 9110], 80.00th=[10290], 90.00th=[15008], 95.00th=[50070], 00:30:28.575 | 99.00th=[55837], 99.50th=[89654], 99.90th=[95945], 99.95th=[95945], 00:30:28.575 | 99.99th=[95945] 00:30:28.575 bw ( KiB/s): min=17408, max=41728, per=32.91%, avg=32179.20, stdev=7697.10, samples=10 00:30:28.575 iops : min= 136, max= 326, avg=251.40, stdev=60.13, samples=10 00:30:28.575 lat (msec) : 10=78.10%, 20=12.70%, 50=3.73%, 100=5.48% 00:30:28.575 cpu : usr=94.51%, sys=5.02%, ctx=13, majf=0, minf=114 00:30:28.575 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.575 issued rwts: total=1260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.575 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:28.575 filename0: (groupid=0, jobs=1): err= 0: pid=2320145: Wed May 15 12:31:55 2024 00:30:28.575 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(162MiB/5010msec) 00:30:28.575 slat (nsec): min=5972, max=54945, avg=11031.03, stdev=5040.10 00:30:28.575 clat (usec): min=4330, max=96610, avg=11600.93, stdev=13112.40 00:30:28.575 lat (usec): min=4337, max=96621, avg=11611.96, stdev=13113.01 00:30:28.575 clat percentiles (usec): 
00:30:28.575 | 1.00th=[ 4621], 5.00th=[ 5014], 10.00th=[ 5342], 20.00th=[ 5997], 00:30:28.575 | 30.00th=[ 6521], 40.00th=[ 7046], 50.00th=[ 7504], 60.00th=[ 8094], 00:30:28.575 | 70.00th=[ 8979], 80.00th=[ 9896], 90.00th=[13960], 95.00th=[50070], 00:30:28.575 | 99.00th=[55313], 99.50th=[56361], 99.90th=[93848], 99.95th=[96994], 00:30:28.575 | 99.99th=[96994] 00:30:28.575 bw ( KiB/s): min=21248, max=45056, per=33.80%, avg=33049.60, stdev=8171.04, samples=10 00:30:28.575 iops : min= 166, max= 352, avg=258.20, stdev=63.84, samples=10 00:30:28.575 lat (msec) : 10=81.30%, 20=9.89%, 50=3.17%, 100=5.64% 00:30:28.575 cpu : usr=94.41%, sys=5.07%, ctx=12, majf=0, minf=156 00:30:28.575 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.575 issued rwts: total=1294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.575 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:28.575 00:30:28.575 Run status group 0 (all jobs): 00:30:28.575 READ: bw=95.5MiB/s (100MB/s), 31.2MiB/s-32.4MiB/s (32.7MB/s-34.0MB/s), io=482MiB (505MB), run=5001-5043msec 00:30:28.575 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:28.575 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:28.575 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:28.575 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:28.575 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
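Between cases, destroy_subsystems undoes what create_subsystems built so that the next case (here NULL_DIF=2 with 4k blocks, 8 jobs and iodepth 16 across three null bdevs) can recreate the devices with a different DIF type. Per subsystem that is just the two deletions traced above, shown here as direct rpc.py calls for reference:

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_null_delete bdev_null0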
00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 bdev_null0 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 [2024-05-15 12:31:56.277992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 bdev_null1 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 bdev_null2 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.576 { 00:30:28.576 "params": { 00:30:28.576 "name": "Nvme$subsystem", 00:30:28.576 "trtype": "$TEST_TRANSPORT", 00:30:28.576 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.576 "adrfam": "ipv4", 00:30:28.576 "trsvcid": "$NVMF_PORT", 00:30:28.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.576 "hdgst": ${hdgst:-false}, 00:30:28.576 "ddgst": ${ddgst:-false} 00:30:28.576 }, 00:30:28.576 "method": "bdev_nvme_attach_controller" 00:30:28.576 } 00:30:28.576 EOF 00:30:28.576 )") 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.576 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.576 { 00:30:28.576 "params": { 00:30:28.576 "name": "Nvme$subsystem", 00:30:28.576 "trtype": "$TEST_TRANSPORT", 00:30:28.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.576 "adrfam": "ipv4", 00:30:28.576 "trsvcid": "$NVMF_PORT", 00:30:28.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.576 "hdgst": ${hdgst:-false}, 00:30:28.576 "ddgst": ${ddgst:-false} 00:30:28.576 }, 00:30:28.576 "method": "bdev_nvme_attach_controller" 00:30:28.577 } 00:30:28.577 EOF 00:30:28.577 )") 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.577 { 00:30:28.577 "params": { 00:30:28.577 "name": "Nvme$subsystem", 00:30:28.577 "trtype": "$TEST_TRANSPORT", 00:30:28.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.577 "adrfam": "ipv4", 00:30:28.577 "trsvcid": "$NVMF_PORT", 00:30:28.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.577 "hdgst": ${hdgst:-false}, 00:30:28.577 "ddgst": ${ddgst:-false} 00:30:28.577 }, 00:30:28.577 "method": "bdev_nvme_attach_controller" 00:30:28.577 } 00:30:28.577 EOF 00:30:28.577 )") 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:28.577 "params": { 00:30:28.577 "name": "Nvme0", 00:30:28.577 "trtype": "tcp", 00:30:28.577 "traddr": "10.0.0.2", 00:30:28.577 "adrfam": "ipv4", 00:30:28.577 "trsvcid": "4420", 00:30:28.577 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:28.577 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:28.577 "hdgst": false, 00:30:28.577 "ddgst": false 00:30:28.577 }, 00:30:28.577 "method": "bdev_nvme_attach_controller" 00:30:28.577 },{ 00:30:28.577 "params": { 00:30:28.577 "name": "Nvme1", 00:30:28.577 "trtype": "tcp", 00:30:28.577 "traddr": "10.0.0.2", 00:30:28.577 "adrfam": "ipv4", 00:30:28.577 "trsvcid": "4420", 00:30:28.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.577 "hdgst": false, 00:30:28.577 "ddgst": false 00:30:28.577 }, 00:30:28.577 "method": "bdev_nvme_attach_controller" 00:30:28.577 },{ 00:30:28.577 "params": { 00:30:28.577 "name": "Nvme2", 00:30:28.577 "trtype": "tcp", 00:30:28.577 "traddr": "10.0.0.2", 00:30:28.577 "adrfam": "ipv4", 00:30:28.577 "trsvcid": "4420", 00:30:28.577 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:28.577 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:28.577 "hdgst": false, 00:30:28.577 "ddgst": false 00:30:28.577 }, 00:30:28.577 "method": "bdev_nvme_attach_controller" 00:30:28.577 }' 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1342 -- # asan_lib= 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:28.577 12:31:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.577 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:28.577 ... 00:30:28.577 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:28.577 ... 00:30:28.577 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:28.577 ... 00:30:28.577 fio-3.35 00:30:28.577 Starting 24 threads 00:30:28.577 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.771 00:30:40.771 filename0: (groupid=0, jobs=1): err= 0: pid=2321447: Wed May 15 12:32:07 2024 00:30:40.771 read: IOPS=619, BW=2477KiB/s (2536kB/s)(24.2MiB/10005msec) 00:30:40.771 slat (nsec): min=3018, max=64342, avg=11299.21, stdev=5834.53 00:30:40.771 clat (usec): min=3702, max=51628, avg=25774.41, stdev=5218.49 00:30:40.771 lat (usec): min=3714, max=51648, avg=25785.71, stdev=5219.26 00:30:40.771 clat percentiles (usec): 00:30:40.772 | 1.00th=[ 9372], 5.00th=[17695], 10.00th=[21890], 20.00th=[23987], 00:30:40.772 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:40.772 | 70.00th=[26084], 80.00th=[27395], 90.00th=[31589], 95.00th=[35390], 00:30:40.772 | 99.00th=[41681], 99.50th=[46400], 99.90th=[51119], 99.95th=[51643], 00:30:40.772 | 99.99th=[51643] 00:30:40.772 bw ( KiB/s): min= 2256, max= 2952, per=4.27%, avg=2475.60, stdev=156.28, samples=20 00:30:40.772 iops : min= 564, max= 738, avg=618.90, stdev=39.07, samples=20 00:30:40.772 lat (msec) : 4=0.26%, 10=0.89%, 20=6.84%, 50=91.75%, 100=0.26% 00:30:40.772 cpu : usr=97.37%, sys=2.21%, ctx=19, majf=0, minf=64 00:30:40.772 IO depths : 1=0.6%, 2=1.4%, 4=7.0%, 8=77.0%, 16=14.0%, 32=0.0%, >=64=0.0% 00:30:40.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 complete : 0=0.0%, 4=90.0%, 8=6.3%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 issued rwts: total=6195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.772 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.772 filename0: (groupid=0, jobs=1): err= 0: pid=2321448: Wed May 15 12:32:07 2024 00:30:40.772 read: IOPS=693, BW=2774KiB/s (2840kB/s)(27.1MiB/10022msec) 00:30:40.772 slat (nsec): min=3977, max=64976, avg=10689.95, stdev=5712.83 00:30:40.772 clat (usec): min=3803, max=45564, avg=22986.10, stdev=6343.84 00:30:40.772 lat (usec): min=3840, max=45601, avg=22996.79, stdev=6345.83 00:30:40.772 clat percentiles (usec): 00:30:40.772 | 1.00th=[10159], 5.00th=[13698], 10.00th=[15008], 20.00th=[16909], 00:30:40.772 | 30.00th=[19006], 40.00th=[22676], 50.00th=[24249], 60.00th=[24773], 00:30:40.772 | 70.00th=[25297], 80.00th=[26084], 90.00th=[31327], 95.00th=[34866], 00:30:40.772 | 99.00th=[38011], 99.50th=[39584], 99.90th=[45351], 99.95th=[45351], 00:30:40.772 | 99.99th=[45351] 00:30:40.772 bw ( KiB/s): min= 2048, max= 3616, per=4.79%, avg=2776.40, stdev=446.69, samples=20 00:30:40.772 iops : min= 512, max= 904, avg=694.10, stdev=111.67, samples=20 00:30:40.772 lat (msec) : 
4=0.23%, 10=0.75%, 20=33.15%, 50=65.87% 00:30:40.772 cpu : usr=96.96%, sys=2.61%, ctx=23, majf=0, minf=90 00:30:40.772 IO depths : 1=1.3%, 2=2.6%, 4=9.9%, 8=74.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:30:40.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 issued rwts: total=6950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.772 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.772 filename0: (groupid=0, jobs=1): err= 0: pid=2321449: Wed May 15 12:32:07 2024 00:30:40.772 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10016msec) 00:30:40.772 slat (nsec): min=6182, max=73014, avg=18257.34, stdev=9769.61 00:30:40.772 clat (usec): min=10101, max=55281, avg=28327.16, stdev=5894.97 00:30:40.772 lat (usec): min=10112, max=55298, avg=28345.42, stdev=5894.63 00:30:40.772 clat percentiles (usec): 00:30:40.772 | 1.00th=[14877], 5.00th=[20579], 10.00th=[23987], 20.00th=[24511], 00:30:40.772 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[27919], 00:30:40.772 | 70.00th=[31065], 80.00th=[33162], 90.00th=[36439], 95.00th=[38536], 00:30:40.772 | 99.00th=[46924], 99.50th=[50070], 99.90th=[55313], 99.95th=[55313], 00:30:40.772 | 99.99th=[55313] 00:30:40.772 bw ( KiB/s): min= 2043, max= 2384, per=3.88%, avg=2246.15, stdev=94.10, samples=20 00:30:40.772 iops : min= 510, max= 596, avg=561.50, stdev=23.61, samples=20 00:30:40.772 lat (msec) : 20=4.70%, 50=94.80%, 100=0.50% 00:30:40.772 cpu : usr=97.34%, sys=2.25%, ctx=14, majf=0, minf=47 00:30:40.772 IO depths : 1=0.7%, 2=1.6%, 4=9.7%, 8=74.8%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:40.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 complete : 0=0.0%, 4=90.6%, 8=5.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 issued rwts: total=5635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.772 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.772 filename0: (groupid=0, jobs=1): err= 0: pid=2321450: Wed May 15 12:32:07 2024 00:30:40.772 read: IOPS=618, BW=2474KiB/s (2533kB/s)(24.2MiB/10006msec) 00:30:40.772 slat (nsec): min=4313, max=67673, avg=16739.77, stdev=10108.98 00:30:40.772 clat (usec): min=6698, max=52833, avg=25790.16, stdev=3725.69 00:30:40.772 lat (usec): min=6713, max=52846, avg=25806.90, stdev=3725.01 00:30:40.772 clat percentiles (usec): 00:30:40.772 | 1.00th=[14877], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:30:40.772 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:30:40.772 | 70.00th=[25822], 80.00th=[26346], 90.00th=[28705], 95.00th=[32900], 00:30:40.772 | 99.00th=[42206], 99.50th=[44303], 99.90th=[50070], 99.95th=[50070], 00:30:40.772 | 99.99th=[52691] 00:30:40.772 bw ( KiB/s): min= 2304, max= 2560, per=4.24%, avg=2457.26, stdev=71.62, samples=19 00:30:40.772 iops : min= 576, max= 640, avg=614.32, stdev=17.90, samples=19 00:30:40.772 lat (msec) : 10=0.26%, 20=2.12%, 50=97.56%, 100=0.06% 00:30:40.772 cpu : usr=97.56%, sys=1.98%, ctx=22, majf=0, minf=72 00:30:40.772 IO depths : 1=0.3%, 2=0.7%, 4=5.5%, 8=78.7%, 16=14.8%, 32=0.0%, >=64=0.0% 00:30:40.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 complete : 0=0.0%, 4=90.0%, 8=6.4%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 issued rwts: total=6188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.772 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.772 filename0: (groupid=0, jobs=1): err= 0: 
pid=2321451: Wed May 15 12:32:07 2024 00:30:40.772 read: IOPS=588, BW=2355KiB/s (2411kB/s)(23.1MiB/10024msec) 00:30:40.772 slat (nsec): min=6552, max=67540, avg=17186.28, stdev=9319.95 00:30:40.772 clat (usec): min=9867, max=49812, avg=27073.14, stdev=5265.86 00:30:40.772 lat (usec): min=9880, max=49828, avg=27090.32, stdev=5266.66 00:30:40.772 clat percentiles (usec): 00:30:40.772 | 1.00th=[14746], 5.00th=[19006], 10.00th=[23462], 20.00th=[24249], 00:30:40.772 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25822], 00:30:40.772 | 70.00th=[27132], 80.00th=[31327], 90.00th=[34341], 95.00th=[37487], 00:30:40.772 | 99.00th=[44303], 99.50th=[45351], 99.90th=[48497], 99.95th=[49546], 00:30:40.772 | 99.99th=[50070] 00:30:40.772 bw ( KiB/s): min= 2128, max= 2480, per=4.07%, avg=2354.00, stdev=97.01, samples=20 00:30:40.772 iops : min= 532, max= 620, avg=588.50, stdev=24.25, samples=20 00:30:40.772 lat (msec) : 10=0.02%, 20=5.73%, 50=94.26% 00:30:40.772 cpu : usr=97.30%, sys=2.27%, ctx=16, majf=0, minf=67 00:30:40.772 IO depths : 1=0.6%, 2=1.3%, 4=8.8%, 8=76.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:40.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 complete : 0=0.0%, 4=89.9%, 8=5.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 issued rwts: total=5901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.772 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.772 filename0: (groupid=0, jobs=1): err= 0: pid=2321452: Wed May 15 12:32:07 2024 00:30:40.772 read: IOPS=622, BW=2491KiB/s (2550kB/s)(24.4MiB/10018msec) 00:30:40.772 slat (nsec): min=6557, max=72024, avg=15554.51, stdev=8425.75 00:30:40.772 clat (usec): min=4162, max=47186, avg=25600.54, stdev=4465.04 00:30:40.772 lat (usec): min=4176, max=47211, avg=25616.10, stdev=4466.32 00:30:40.772 clat percentiles (usec): 00:30:40.772 | 1.00th=[11994], 5.00th=[18482], 10.00th=[22938], 20.00th=[23987], 00:30:40.772 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:30:40.772 | 70.00th=[25822], 80.00th=[26346], 90.00th=[31589], 95.00th=[33817], 00:30:40.772 | 99.00th=[39584], 99.50th=[40633], 99.90th=[45351], 99.95th=[46924], 00:30:40.772 | 99.99th=[47449] 00:30:40.772 bw ( KiB/s): min= 2320, max= 2784, per=4.30%, avg=2488.80, stdev=116.48, samples=20 00:30:40.772 iops : min= 580, max= 696, avg=622.20, stdev=29.12, samples=20 00:30:40.772 lat (msec) : 10=0.80%, 20=6.19%, 50=93.01% 00:30:40.772 cpu : usr=97.14%, sys=2.42%, ctx=16, majf=0, minf=90 00:30:40.772 IO depths : 1=0.9%, 2=1.8%, 4=8.9%, 8=75.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:30:40.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 issued rwts: total=6238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.772 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.772 filename0: (groupid=0, jobs=1): err= 0: pid=2321453: Wed May 15 12:32:07 2024 00:30:40.772 read: IOPS=592, BW=2370KiB/s (2427kB/s)(23.2MiB/10005msec) 00:30:40.772 slat (nsec): min=4233, max=68036, avg=16228.84, stdev=9672.67 00:30:40.772 clat (usec): min=6913, max=50909, avg=26914.10, stdev=4798.99 00:30:40.772 lat (usec): min=6920, max=50917, avg=26930.33, stdev=4797.88 00:30:40.772 clat percentiles (usec): 00:30:40.772 | 1.00th=[13960], 5.00th=[22676], 10.00th=[23987], 20.00th=[24511], 00:30:40.772 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822], 00:30:40.772 | 70.00th=[26346], 80.00th=[30278], 
90.00th=[33817], 95.00th=[36963], 00:30:40.772 | 99.00th=[39584], 99.50th=[43779], 99.90th=[50594], 99.95th=[51119], 00:30:40.772 | 99.99th=[51119] 00:30:40.772 bw ( KiB/s): min= 1920, max= 2560, per=4.07%, avg=2358.32, stdev=142.75, samples=19 00:30:40.772 iops : min= 480, max= 640, avg=589.58, stdev=35.69, samples=19 00:30:40.772 lat (msec) : 10=0.22%, 20=3.37%, 50=96.14%, 100=0.27% 00:30:40.772 cpu : usr=97.18%, sys=2.38%, ctx=19, majf=0, minf=70 00:30:40.772 IO depths : 1=0.6%, 2=1.5%, 4=8.5%, 8=76.2%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:40.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.772 issued rwts: total=5928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.772 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.772 filename0: (groupid=0, jobs=1): err= 0: pid=2321454: Wed May 15 12:32:07 2024 00:30:40.772 read: IOPS=604, BW=2417KiB/s (2475kB/s)(23.6MiB/10018msec) 00:30:40.772 slat (nsec): min=6499, max=84536, avg=20808.78, stdev=12441.93 00:30:40.772 clat (usec): min=12921, max=50014, avg=26338.85, stdev=4735.44 00:30:40.772 lat (usec): min=12935, max=50042, avg=26359.66, stdev=4735.06 00:30:40.772 clat percentiles (usec): 00:30:40.772 | 1.00th=[15270], 5.00th=[19006], 10.00th=[22676], 20.00th=[24249], 00:30:40.772 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[25560], 00:30:40.772 | 70.00th=[26346], 80.00th=[29754], 90.00th=[32900], 95.00th=[36963], 00:30:40.772 | 99.00th=[39060], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:30:40.772 | 99.99th=[50070] 00:30:40.772 bw ( KiB/s): min= 2096, max= 2688, per=4.17%, avg=2415.20, stdev=153.95, samples=20 00:30:40.772 iops : min= 524, max= 672, avg=603.80, stdev=38.49, samples=20 00:30:40.772 lat (msec) : 20=7.07%, 50=92.91%, 100=0.02% 00:30:40.772 cpu : usr=97.55%, sys=1.99%, ctx=59, majf=0, minf=63 00:30:40.772 IO depths : 1=1.9%, 2=3.8%, 4=12.1%, 8=70.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:30:40.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 complete : 0=0.0%, 4=90.9%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 issued rwts: total=6054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.773 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.773 filename1: (groupid=0, jobs=1): err= 0: pid=2321455: Wed May 15 12:32:07 2024 00:30:40.773 read: IOPS=623, BW=2493KiB/s (2553kB/s)(24.4MiB/10017msec) 00:30:40.773 slat (nsec): min=6609, max=67069, avg=19268.25, stdev=9876.03 00:30:40.773 clat (usec): min=11630, max=42721, avg=25528.17, stdev=3406.68 00:30:40.773 lat (usec): min=11637, max=42738, avg=25547.43, stdev=3407.50 00:30:40.773 clat percentiles (usec): 00:30:40.773 | 1.00th=[16057], 5.00th=[20841], 10.00th=[23725], 20.00th=[24249], 00:30:40.773 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:30:40.773 | 70.00th=[25560], 80.00th=[26084], 90.00th=[30802], 95.00th=[33162], 00:30:40.773 | 99.00th=[36439], 99.50th=[36963], 99.90th=[38536], 99.95th=[40633], 00:30:40.773 | 99.99th=[42730] 00:30:40.773 bw ( KiB/s): min= 2176, max= 2640, per=4.30%, avg=2491.20, stdev=118.04, samples=20 00:30:40.773 iops : min= 544, max= 660, avg=622.80, stdev=29.51, samples=20 00:30:40.773 lat (msec) : 20=4.58%, 50=95.42% 00:30:40.773 cpu : usr=96.95%, sys=2.62%, ctx=25, majf=0, minf=65 00:30:40.773 IO depths : 1=2.2%, 2=4.5%, 4=13.0%, 8=69.6%, 16=10.6%, 32=0.0%, >=64=0.0% 00:30:40.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 complete : 0=0.0%, 4=90.9%, 8=3.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 issued rwts: total=6244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.773 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.773 filename1: (groupid=0, jobs=1): err= 0: pid=2321456: Wed May 15 12:32:07 2024 00:30:40.773 read: IOPS=580, BW=2322KiB/s (2377kB/s)(22.8MiB/10034msec) 00:30:40.773 slat (nsec): min=6393, max=96993, avg=29218.50, stdev=15917.99 00:30:40.773 clat (usec): min=3981, max=49792, avg=27382.79, stdev=6003.14 00:30:40.773 lat (usec): min=4012, max=49805, avg=27412.00, stdev=6000.30 00:30:40.773 clat percentiles (usec): 00:30:40.773 | 1.00th=[10290], 5.00th=[17695], 10.00th=[22938], 20.00th=[24249], 00:30:40.773 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:30:40.773 | 70.00th=[30540], 80.00th=[32637], 90.00th=[35914], 95.00th=[37487], 00:30:40.773 | 99.00th=[43254], 99.50th=[44827], 99.90th=[49546], 99.95th=[49546], 00:30:40.773 | 99.99th=[49546] 00:30:40.773 bw ( KiB/s): min= 1920, max= 2640, per=4.01%, avg=2323.20, stdev=221.53, samples=20 00:30:40.773 iops : min= 480, max= 660, avg=580.80, stdev=55.38, samples=20 00:30:40.773 lat (msec) : 4=0.02%, 10=0.81%, 20=5.99%, 50=93.18% 00:30:40.773 cpu : usr=97.71%, sys=1.64%, ctx=164, majf=0, minf=55 00:30:40.773 IO depths : 1=1.9%, 2=3.9%, 4=12.2%, 8=70.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:30:40.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 issued rwts: total=5824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.773 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.773 filename1: (groupid=0, jobs=1): err= 0: pid=2321457: Wed May 15 12:32:07 2024 00:30:40.773 read: IOPS=619, BW=2480KiB/s (2539kB/s)(24.3MiB/10016msec) 00:30:40.773 slat (nsec): min=6422, max=86836, avg=18169.68, stdev=10455.50 00:30:40.773 clat (usec): min=9713, max=57707, avg=25686.37, stdev=3886.31 00:30:40.773 lat (usec): min=9721, max=57735, avg=25704.54, stdev=3887.34 00:30:40.773 clat percentiles (usec): 00:30:40.773 | 1.00th=[14746], 5.00th=[22414], 10.00th=[23725], 20.00th=[24249], 00:30:40.773 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:30:40.773 | 70.00th=[25560], 80.00th=[26084], 90.00th=[29754], 95.00th=[33162], 00:30:40.773 | 99.00th=[38011], 99.50th=[40109], 99.90th=[57410], 99.95th=[57934], 00:30:40.773 | 99.99th=[57934] 00:30:40.773 bw ( KiB/s): min= 2176, max= 2656, per=4.28%, avg=2477.20, stdev=110.89, samples=20 00:30:40.773 iops : min= 544, max= 664, avg=619.30, stdev=27.72, samples=20 00:30:40.773 lat (msec) : 10=0.16%, 20=3.96%, 50=95.62%, 100=0.26% 00:30:40.773 cpu : usr=97.28%, sys=2.29%, ctx=32, majf=0, minf=75 00:30:40.773 IO depths : 1=2.4%, 2=4.9%, 4=13.3%, 8=68.8%, 16=10.6%, 32=0.0%, >=64=0.0% 00:30:40.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 complete : 0=0.0%, 4=90.7%, 8=4.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 issued rwts: total=6209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.773 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.773 filename1: (groupid=0, jobs=1): err= 0: pid=2321458: Wed May 15 12:32:07 2024 00:30:40.773 read: IOPS=577, BW=2309KiB/s (2365kB/s)(22.6MiB/10029msec) 00:30:40.773 slat (nsec): min=6600, max=70632, avg=17495.70, stdev=9189.58 00:30:40.773 clat (usec): min=10580, max=51904, 
avg=27603.63, stdev=5546.13 00:30:40.773 lat (usec): min=10587, max=51922, avg=27621.13, stdev=5546.23 00:30:40.773 clat percentiles (usec): 00:30:40.773 | 1.00th=[14353], 5.00th=[19792], 10.00th=[23725], 20.00th=[24511], 00:30:40.773 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:30:40.773 | 70.00th=[29492], 80.00th=[32113], 90.00th=[35390], 95.00th=[37487], 00:30:40.773 | 99.00th=[44303], 99.50th=[47973], 99.90th=[50594], 99.95th=[50594], 00:30:40.773 | 99.99th=[51643] 00:30:40.773 bw ( KiB/s): min= 2048, max= 2512, per=3.99%, avg=2309.60, stdev=107.77, samples=20 00:30:40.773 iops : min= 512, max= 628, avg=577.40, stdev=26.94, samples=20 00:30:40.773 lat (msec) : 20=5.18%, 50=94.66%, 100=0.16% 00:30:40.773 cpu : usr=97.52%, sys=2.04%, ctx=16, majf=0, minf=92 00:30:40.773 IO depths : 1=0.5%, 2=1.2%, 4=8.7%, 8=76.2%, 16=13.4%, 32=0.0%, >=64=0.0% 00:30:40.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 complete : 0=0.0%, 4=90.2%, 8=5.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 issued rwts: total=5790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.773 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.773 filename1: (groupid=0, jobs=1): err= 0: pid=2321459: Wed May 15 12:32:07 2024 00:30:40.773 read: IOPS=595, BW=2384KiB/s (2441kB/s)(23.3MiB/10013msec) 00:30:40.773 slat (nsec): min=6528, max=63671, avg=17560.33, stdev=9572.93 00:30:40.773 clat (usec): min=8816, max=58459, avg=26747.20, stdev=5345.14 00:30:40.773 lat (usec): min=8825, max=58483, avg=26764.76, stdev=5345.76 00:30:40.773 clat percentiles (usec): 00:30:40.773 | 1.00th=[13960], 5.00th=[18220], 10.00th=[22938], 20.00th=[24249], 00:30:40.773 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25822], 00:30:40.773 | 70.00th=[26870], 80.00th=[31065], 90.00th=[34866], 95.00th=[36439], 00:30:40.773 | 99.00th=[42206], 99.50th=[43779], 99.90th=[51119], 99.95th=[58459], 00:30:40.773 | 99.99th=[58459] 00:30:40.773 bw ( KiB/s): min= 1952, max= 2528, per=4.11%, avg=2381.20, stdev=133.37, samples=20 00:30:40.773 iops : min= 488, max= 632, avg=595.30, stdev=33.34, samples=20 00:30:40.773 lat (msec) : 10=0.08%, 20=7.37%, 50=92.27%, 100=0.27% 00:30:40.773 cpu : usr=97.41%, sys=2.16%, ctx=15, majf=0, minf=92 00:30:40.773 IO depths : 1=0.6%, 2=1.3%, 4=9.1%, 8=75.7%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:40.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 complete : 0=0.0%, 4=90.4%, 8=5.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 issued rwts: total=5967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.773 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.773 filename1: (groupid=0, jobs=1): err= 0: pid=2321460: Wed May 15 12:32:07 2024 00:30:40.773 read: IOPS=596, BW=2388KiB/s (2445kB/s)(23.3MiB/10007msec) 00:30:40.773 slat (nsec): min=4328, max=67516, avg=18147.56, stdev=9680.41 00:30:40.773 clat (usec): min=7148, max=60424, avg=26701.60, stdev=5254.41 00:30:40.773 lat (usec): min=7156, max=60440, avg=26719.74, stdev=5254.15 00:30:40.773 clat percentiles (usec): 00:30:40.773 | 1.00th=[12911], 5.00th=[18220], 10.00th=[23200], 20.00th=[24249], 00:30:40.773 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25822], 00:30:40.773 | 70.00th=[26870], 80.00th=[30802], 90.00th=[34341], 95.00th=[37487], 00:30:40.773 | 99.00th=[41157], 99.50th=[42206], 99.90th=[47449], 99.95th=[48497], 00:30:40.773 | 99.99th=[60556] 00:30:40.773 bw ( KiB/s): min= 2200, max= 2541, per=4.12%, avg=2383.65, 
stdev=74.93, samples=20 00:30:40.773 iops : min= 550, max= 635, avg=595.90, stdev=18.71, samples=20 00:30:40.773 lat (msec) : 10=0.17%, 20=6.40%, 50=93.42%, 100=0.02% 00:30:40.773 cpu : usr=97.30%, sys=2.27%, ctx=15, majf=0, minf=64 00:30:40.773 IO depths : 1=0.7%, 2=1.5%, 4=8.6%, 8=76.0%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:40.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 complete : 0=0.0%, 4=90.0%, 8=5.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 issued rwts: total=5973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.773 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.773 filename1: (groupid=0, jobs=1): err= 0: pid=2321461: Wed May 15 12:32:07 2024 00:30:40.773 read: IOPS=613, BW=2454KiB/s (2513kB/s)(24.0MiB/10004msec) 00:30:40.773 slat (nsec): min=6329, max=69406, avg=18553.42, stdev=10004.37 00:30:40.773 clat (usec): min=7177, max=72390, avg=25977.52, stdev=3874.21 00:30:40.773 lat (usec): min=7185, max=72404, avg=25996.07, stdev=3872.87 00:30:40.773 clat percentiles (usec): 00:30:40.773 | 1.00th=[17695], 5.00th=[23725], 10.00th=[23987], 20.00th=[24511], 00:30:40.773 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:40.773 | 70.00th=[25822], 80.00th=[26084], 90.00th=[29492], 95.00th=[33817], 00:30:40.773 | 99.00th=[39060], 99.50th=[42730], 99.90th=[60031], 99.95th=[71828], 00:30:40.773 | 99.99th=[72877] 00:30:40.773 bw ( KiB/s): min= 1931, max= 2576, per=4.21%, avg=2440.16, stdev=169.46, samples=19 00:30:40.773 iops : min= 482, max= 644, avg=610.00, stdev=42.49, samples=19 00:30:40.773 lat (msec) : 10=0.15%, 20=1.45%, 50=98.14%, 100=0.26% 00:30:40.773 cpu : usr=97.56%, sys=1.97%, ctx=18, majf=0, minf=80 00:30:40.773 IO depths : 1=0.4%, 2=1.5%, 4=9.1%, 8=74.8%, 16=14.2%, 32=0.0%, >=64=0.0% 00:30:40.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 complete : 0=0.0%, 4=91.0%, 8=4.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.773 issued rwts: total=6137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.773 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.773 filename1: (groupid=0, jobs=1): err= 0: pid=2321462: Wed May 15 12:32:07 2024 00:30:40.773 read: IOPS=571, BW=2286KiB/s (2341kB/s)(22.4MiB/10018msec) 00:30:40.773 slat (nsec): min=6525, max=70621, avg=18118.70, stdev=9514.19 00:30:40.773 clat (usec): min=11760, max=50985, avg=27887.94, stdev=5475.39 00:30:40.774 lat (usec): min=11774, max=50998, avg=27906.06, stdev=5475.21 00:30:40.774 clat percentiles (usec): 00:30:40.774 | 1.00th=[15139], 5.00th=[20579], 10.00th=[23725], 20.00th=[24511], 00:30:40.774 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:30:40.774 | 70.00th=[30016], 80.00th=[32637], 90.00th=[35914], 95.00th=[38011], 00:30:40.774 | 99.00th=[42730], 99.50th=[46924], 99.90th=[49546], 99.95th=[51119], 00:30:40.774 | 99.99th=[51119] 00:30:40.774 bw ( KiB/s): min= 2072, max= 2432, per=3.94%, avg=2283.60, stdev=104.61, samples=20 00:30:40.774 iops : min= 518, max= 608, avg=570.90, stdev=26.15, samples=20 00:30:40.774 lat (msec) : 20=4.38%, 50=95.55%, 100=0.07% 00:30:40.774 cpu : usr=97.25%, sys=2.31%, ctx=25, majf=0, minf=67 00:30:40.774 IO depths : 1=1.0%, 2=2.1%, 4=10.5%, 8=73.6%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:40.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 issued rwts: total=5725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:30:40.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.774 filename2: (groupid=0, jobs=1): err= 0: pid=2321463: Wed May 15 12:32:07 2024 00:30:40.774 read: IOPS=593, BW=2376KiB/s (2433kB/s)(23.2MiB/10018msec) 00:30:40.774 slat (nsec): min=6560, max=97318, avg=17403.82, stdev=9369.07 00:30:40.774 clat (usec): min=9307, max=44951, avg=26832.40, stdev=5560.46 00:30:40.774 lat (usec): min=9320, max=44972, avg=26849.80, stdev=5561.40 00:30:40.774 clat percentiles (usec): 00:30:40.774 | 1.00th=[13698], 5.00th=[17171], 10.00th=[20579], 20.00th=[23987], 00:30:40.774 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25297], 60.00th=[26084], 00:30:40.774 | 70.00th=[28181], 80.00th=[31851], 90.00th=[35390], 95.00th=[36963], 00:30:40.774 | 99.00th=[40109], 99.50th=[43254], 99.90th=[44303], 99.95th=[44827], 00:30:40.774 | 99.99th=[44827] 00:30:40.774 bw ( KiB/s): min= 2024, max= 2536, per=4.10%, avg=2373.60, stdev=114.38, samples=20 00:30:40.774 iops : min= 506, max= 634, avg=593.40, stdev=28.60, samples=20 00:30:40.774 lat (msec) : 10=0.02%, 20=8.87%, 50=91.11% 00:30:40.774 cpu : usr=97.46%, sys=2.11%, ctx=15, majf=0, minf=58 00:30:40.774 IO depths : 1=1.3%, 2=2.5%, 4=11.0%, 8=73.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:30:40.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 issued rwts: total=5950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.774 filename2: (groupid=0, jobs=1): err= 0: pid=2321464: Wed May 15 12:32:07 2024 00:30:40.774 read: IOPS=585, BW=2342KiB/s (2398kB/s)(22.9MiB/10004msec) 00:30:40.774 slat (nsec): min=6536, max=84582, avg=17890.26, stdev=9923.07 00:30:40.774 clat (usec): min=6001, max=60189, avg=27227.10, stdev=5324.05 00:30:40.774 lat (usec): min=6011, max=60204, avg=27244.99, stdev=5323.20 00:30:40.774 clat percentiles (usec): 00:30:40.774 | 1.00th=[14353], 5.00th=[21103], 10.00th=[23725], 20.00th=[24511], 00:30:40.774 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:30:40.774 | 70.00th=[27132], 80.00th=[31327], 90.00th=[35390], 95.00th=[36963], 00:30:40.774 | 99.00th=[41157], 99.50th=[43779], 99.90th=[60031], 99.95th=[60031], 00:30:40.774 | 99.99th=[60031] 00:30:40.774 bw ( KiB/s): min= 2100, max= 2456, per=4.01%, avg=2323.58, stdev=97.00, samples=19 00:30:40.774 iops : min= 525, max= 614, avg=580.89, stdev=24.25, samples=19 00:30:40.774 lat (msec) : 10=0.22%, 20=4.32%, 50=95.19%, 100=0.27% 00:30:40.774 cpu : usr=97.36%, sys=2.22%, ctx=16, majf=0, minf=81 00:30:40.774 IO depths : 1=0.8%, 2=1.6%, 4=8.6%, 8=75.9%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:40.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 complete : 0=0.0%, 4=90.1%, 8=5.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 issued rwts: total=5858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.774 filename2: (groupid=0, jobs=1): err= 0: pid=2321465: Wed May 15 12:32:07 2024 00:30:40.774 read: IOPS=583, BW=2336KiB/s (2392kB/s)(22.8MiB/10005msec) 00:30:40.774 slat (nsec): min=4160, max=68347, avg=18376.79, stdev=10117.69 00:30:40.774 clat (usec): min=7283, max=49145, avg=27274.71, stdev=5054.10 00:30:40.774 lat (usec): min=7295, max=49154, avg=27293.09, stdev=5053.09 00:30:40.774 clat percentiles (usec): 00:30:40.774 | 1.00th=[14877], 5.00th=[21365], 10.00th=[23725], 
20.00th=[24249], 00:30:40.774 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822], 00:30:40.774 | 70.00th=[28443], 80.00th=[31589], 90.00th=[35390], 95.00th=[37487], 00:30:40.774 | 99.00th=[40633], 99.50th=[42206], 99.90th=[45351], 99.95th=[45876], 00:30:40.774 | 99.99th=[49021] 00:30:40.774 bw ( KiB/s): min= 2048, max= 2560, per=4.02%, avg=2325.05, stdev=149.65, samples=19 00:30:40.774 iops : min= 512, max= 640, avg=581.26, stdev=37.41, samples=19 00:30:40.774 lat (msec) : 10=0.07%, 20=4.31%, 50=95.62% 00:30:40.774 cpu : usr=97.11%, sys=2.46%, ctx=20, majf=0, minf=52 00:30:40.774 IO depths : 1=2.1%, 2=4.5%, 4=13.5%, 8=68.6%, 16=11.3%, 32=0.0%, >=64=0.0% 00:30:40.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 complete : 0=0.0%, 4=91.3%, 8=3.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 issued rwts: total=5842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.774 filename2: (groupid=0, jobs=1): err= 0: pid=2321466: Wed May 15 12:32:07 2024 00:30:40.774 read: IOPS=620, BW=2483KiB/s (2543kB/s)(24.3MiB/10015msec) 00:30:40.774 slat (nsec): min=6567, max=62705, avg=17240.20, stdev=9093.50 00:30:40.774 clat (usec): min=13751, max=45183, avg=25653.66, stdev=3498.86 00:30:40.774 lat (usec): min=13759, max=45207, avg=25670.90, stdev=3499.55 00:30:40.774 clat percentiles (usec): 00:30:40.774 | 1.00th=[16712], 5.00th=[20841], 10.00th=[23725], 20.00th=[24249], 00:30:40.774 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:30:40.774 | 70.00th=[25560], 80.00th=[26346], 90.00th=[30540], 95.00th=[32113], 00:30:40.774 | 99.00th=[38536], 99.50th=[40633], 99.90th=[44827], 99.95th=[44827], 00:30:40.774 | 99.99th=[45351] 00:30:40.774 bw ( KiB/s): min= 2256, max= 2608, per=4.28%, avg=2480.80, stdev=89.68, samples=20 00:30:40.774 iops : min= 564, max= 652, avg=620.20, stdev=22.42, samples=20 00:30:40.774 lat (msec) : 20=4.57%, 50=95.43% 00:30:40.774 cpu : usr=96.88%, sys=2.67%, ctx=21, majf=0, minf=90 00:30:40.774 IO depths : 1=1.7%, 2=3.3%, 4=11.0%, 8=72.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:30:40.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 complete : 0=0.0%, 4=90.3%, 8=4.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 issued rwts: total=6218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.774 filename2: (groupid=0, jobs=1): err= 0: pid=2321467: Wed May 15 12:32:07 2024 00:30:40.774 read: IOPS=637, BW=2552KiB/s (2613kB/s)(24.9MiB/10007msec) 00:30:40.774 slat (nsec): min=4140, max=70094, avg=23030.27, stdev=9607.43 00:30:40.774 clat (usec): min=6855, max=52621, avg=24868.85, stdev=1615.19 00:30:40.774 lat (usec): min=6862, max=52635, avg=24891.88, stdev=1615.28 00:30:40.774 clat percentiles (usec): 00:30:40.774 | 1.00th=[22676], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:30:40.774 | 30.00th=[24511], 40.00th=[24773], 50.00th=[24773], 60.00th=[25035], 00:30:40.774 | 70.00th=[25297], 80.00th=[25560], 90.00th=[25822], 95.00th=[26346], 00:30:40.774 | 99.00th=[27132], 99.50th=[27919], 99.90th=[38011], 99.95th=[38011], 00:30:40.774 | 99.99th=[52691] 00:30:40.774 bw ( KiB/s): min= 2432, max= 2688, per=4.40%, avg=2548.15, stdev=57.03, samples=20 00:30:40.774 iops : min= 608, max= 672, avg=637.00, stdev=14.25, samples=20 00:30:40.774 lat (msec) : 10=0.25%, 20=0.56%, 50=99.15%, 100=0.03% 00:30:40.774 cpu : usr=97.62%, sys=1.97%, ctx=14, majf=0, 
minf=54 00:30:40.774 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:40.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.774 filename2: (groupid=0, jobs=1): err= 0: pid=2321468: Wed May 15 12:32:07 2024 00:30:40.774 read: IOPS=614, BW=2457KiB/s (2516kB/s)(24.0MiB/10013msec) 00:30:40.774 slat (nsec): min=6262, max=67247, avg=17854.84, stdev=8825.21 00:30:40.774 clat (usec): min=10912, max=57764, avg=25944.32, stdev=3850.79 00:30:40.774 lat (usec): min=10929, max=57781, avg=25962.17, stdev=3850.63 00:30:40.774 clat percentiles (usec): 00:30:40.774 | 1.00th=[15139], 5.00th=[23200], 10.00th=[23987], 20.00th=[24511], 00:30:40.774 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:40.774 | 70.00th=[25822], 80.00th=[26346], 90.00th=[30540], 95.00th=[33817], 00:30:40.774 | 99.00th=[39060], 99.50th=[40109], 99.90th=[50070], 99.95th=[57410], 00:30:40.774 | 99.99th=[57934] 00:30:40.774 bw ( KiB/s): min= 2224, max= 2592, per=4.24%, avg=2456.20, stdev=106.88, samples=20 00:30:40.774 iops : min= 556, max= 648, avg=614.05, stdev=26.72, samples=20 00:30:40.774 lat (msec) : 20=3.54%, 50=96.20%, 100=0.26% 00:30:40.774 cpu : usr=97.22%, sys=2.32%, ctx=18, majf=0, minf=62 00:30:40.774 IO depths : 1=0.4%, 2=0.9%, 4=7.8%, 8=77.5%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:40.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.774 issued rwts: total=6150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.774 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.774 filename2: (groupid=0, jobs=1): err= 0: pid=2321469: Wed May 15 12:32:07 2024 00:30:40.774 read: IOPS=563, BW=2254KiB/s (2308kB/s)(22.0MiB/10004msec) 00:30:40.774 slat (nsec): min=4437, max=63047, avg=17142.58, stdev=9445.86 00:30:40.774 clat (usec): min=6797, max=60203, avg=28287.58, stdev=5664.64 00:30:40.774 lat (usec): min=6810, max=60217, avg=28304.72, stdev=5663.86 00:30:40.774 clat percentiles (usec): 00:30:40.774 | 1.00th=[13566], 5.00th=[22676], 10.00th=[23987], 20.00th=[24511], 00:30:40.774 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[28967], 00:30:40.774 | 70.00th=[31327], 80.00th=[33817], 90.00th=[35914], 95.00th=[37487], 00:30:40.774 | 99.00th=[40633], 99.50th=[43254], 99.90th=[60031], 99.95th=[60031], 00:30:40.774 | 99.99th=[60031] 00:30:40.774 bw ( KiB/s): min= 1920, max= 2488, per=3.86%, avg=2233.26, stdev=191.23, samples=19 00:30:40.774 iops : min= 480, max= 622, avg=558.32, stdev=47.81, samples=19 00:30:40.774 lat (msec) : 10=0.18%, 20=3.49%, 50=96.04%, 100=0.28% 00:30:40.774 cpu : usr=97.17%, sys=2.41%, ctx=18, majf=0, minf=79 00:30:40.775 IO depths : 1=1.7%, 2=3.5%, 4=11.8%, 8=71.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:40.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.775 complete : 0=0.0%, 4=90.9%, 8=4.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.775 issued rwts: total=5637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.775 filename2: (groupid=0, jobs=1): err= 0: pid=2321470: Wed May 15 12:32:07 2024 00:30:40.775 read: IOPS=623, BW=2495KiB/s 
(2555kB/s)(24.4MiB/10017msec) 00:30:40.775 slat (nsec): min=6553, max=63514, avg=17109.88, stdev=8734.91 00:30:40.775 clat (usec): min=10469, max=47893, avg=25546.56, stdev=3141.71 00:30:40.775 lat (usec): min=10476, max=47907, avg=25563.67, stdev=3141.60 00:30:40.775 clat percentiles (usec): 00:30:40.775 | 1.00th=[17171], 5.00th=[22938], 10.00th=[23725], 20.00th=[24249], 00:30:40.775 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:30:40.775 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 95.00th=[31589], 00:30:40.775 | 99.00th=[38536], 99.50th=[39060], 99.90th=[44303], 99.95th=[47973], 00:30:40.775 | 99.99th=[47973] 00:30:40.775 bw ( KiB/s): min= 2176, max= 2640, per=4.30%, avg=2492.80, stdev=117.62, samples=20 00:30:40.775 iops : min= 544, max= 660, avg=623.20, stdev=29.40, samples=20 00:30:40.775 lat (msec) : 20=2.34%, 50=97.66% 00:30:40.775 cpu : usr=97.22%, sys=2.36%, ctx=14, majf=0, minf=79 00:30:40.775 IO depths : 1=1.2%, 2=3.1%, 4=11.7%, 8=72.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:40.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.775 complete : 0=0.0%, 4=90.8%, 8=3.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.775 issued rwts: total=6248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.775 00:30:40.775 Run status group 0 (all jobs): 00:30:40.775 READ: bw=56.5MiB/s (59.3MB/s), 2250KiB/s-2774KiB/s (2304kB/s-2840kB/s), io=567MiB (595MB), run=10004-10034msec 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
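Reference note: the 24-thread randread results above come from a job file that gen_fio_conf writes to /dev/fd/61; the file itself is not captured in this log. The sketch below reconstructs an approximate equivalent from what is visible — rw=randread, bs=4k, iodepth=16, numjobs=8 and three filename sections (3 x 8 = 24 threads). The bdev names Nvme0n1/Nvme1n1/Nvme2n1 are an assumption based on SPDK's usual controller-to-bdev naming and do not appear in the log.

# Sketch only: approximate job file behind the 24-thread run above
# (bdev names are assumed; the real file is generated by gen_fio_conf).
cat > dif_randread.fio <<'FIO'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=4k
iodepth=16
numjobs=8
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
[filename2]
filename=Nvme2n1
FIO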
00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 bdev_null0 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 [2024-05-15 12:32:08.042771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 bdev_null1 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@532 -- # local subsystem config 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:40.775 { 00:30:40.775 "params": { 00:30:40.775 "name": "Nvme$subsystem", 00:30:40.775 "trtype": "$TEST_TRANSPORT", 00:30:40.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.775 "adrfam": "ipv4", 00:30:40.775 "trsvcid": "$NVMF_PORT", 00:30:40.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.775 "hdgst": ${hdgst:-false}, 00:30:40.775 "ddgst": ${ddgst:-false} 00:30:40.775 }, 00:30:40.775 "method": "bdev_nvme_attach_controller" 00:30:40.775 } 00:30:40.775 EOF 00:30:40.775 )") 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.775 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:40.776 { 00:30:40.776 "params": { 00:30:40.776 "name": "Nvme$subsystem", 00:30:40.776 "trtype": "$TEST_TRANSPORT", 00:30:40.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.776 "adrfam": "ipv4", 00:30:40.776 "trsvcid": "$NVMF_PORT", 00:30:40.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.776 "hdgst": ${hdgst:-false}, 00:30:40.776 "ddgst": ${ddgst:-false} 00:30:40.776 }, 00:30:40.776 "method": "bdev_nvme_attach_controller" 00:30:40.776 } 00:30:40.776 EOF 00:30:40.776 )") 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:30:40.776 12:32:08 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:40.776 "params": { 00:30:40.776 "name": "Nvme0", 00:30:40.776 "trtype": "tcp", 00:30:40.776 "traddr": "10.0.0.2", 00:30:40.776 "adrfam": "ipv4", 00:30:40.776 "trsvcid": "4420", 00:30:40.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.776 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:40.776 "hdgst": false, 00:30:40.776 "ddgst": false 00:30:40.776 }, 00:30:40.776 "method": "bdev_nvme_attach_controller" 00:30:40.776 },{ 00:30:40.776 "params": { 00:30:40.776 "name": "Nvme1", 00:30:40.776 "trtype": "tcp", 00:30:40.776 "traddr": "10.0.0.2", 00:30:40.776 "adrfam": "ipv4", 00:30:40.776 "trsvcid": "4420", 00:30:40.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:40.776 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:40.776 "hdgst": false, 00:30:40.776 "ddgst": false 00:30:40.776 }, 00:30:40.776 "method": "bdev_nvme_attach_controller" 00:30:40.776 }' 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:40.776 12:32:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.776 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:40.776 ... 00:30:40.776 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:40.776 ... 
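For reference, the subsystem setup that the trace above drives through rpc_cmd can be reproduced by hand with SPDK's scripts/rpc.py. This is a minimal sketch under the conditions visible in this run (an nvmf target already running with a TCP transport, the same null-bdev geometry used above with 512-byte blocks, 16 bytes of metadata and DIF type 1, and a listener on 10.0.0.2:4420); it is not the harness's own helper code, and the second subsystem (bdev_null1 / cnode1) follows the same pattern:

  # create a DIF-type-1 null bdev and export it as an NVMe/TCP subsystem
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420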
00:30:40.776 fio-3.35 00:30:40.776 Starting 4 threads 00:30:40.776 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.033 00:30:46.033 filename0: (groupid=0, jobs=1): err= 0: pid=2323468: Wed May 15 12:32:14 2024 00:30:46.033 read: IOPS=2900, BW=22.7MiB/s (23.8MB/s)(113MiB/5003msec) 00:30:46.033 slat (nsec): min=5934, max=38285, avg=8424.47, stdev=2913.48 00:30:46.033 clat (usec): min=1208, max=45932, avg=2736.55, stdev=1100.89 00:30:46.033 lat (usec): min=1218, max=45966, avg=2744.97, stdev=1100.93 00:30:46.033 clat percentiles (usec): 00:30:46.033 | 1.00th=[ 1844], 5.00th=[ 2040], 10.00th=[ 2180], 20.00th=[ 2343], 00:30:46.033 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2835], 00:30:46.033 | 70.00th=[ 2900], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3425], 00:30:46.033 | 99.00th=[ 3916], 99.50th=[ 4113], 99.90th=[ 4752], 99.95th=[45876], 00:30:46.033 | 99.99th=[45876] 00:30:46.033 bw ( KiB/s): min=21840, max=24192, per=26.57%, avg=23204.80, stdev=620.32, samples=10 00:30:46.033 iops : min= 2730, max= 3024, avg=2900.60, stdev=77.54, samples=10 00:30:46.033 lat (msec) : 2=3.79%, 4=95.50%, 10=0.65%, 50=0.06% 00:30:46.033 cpu : usr=94.12%, sys=5.52%, ctx=9, majf=0, minf=0 00:30:46.033 IO depths : 1=0.1%, 2=1.2%, 4=66.3%, 8=32.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.033 complete : 0=0.0%, 4=95.8%, 8=4.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.033 issued rwts: total=14511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.033 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:46.033 filename0: (groupid=0, jobs=1): err= 0: pid=2323469: Wed May 15 12:32:14 2024 00:30:46.033 read: IOPS=2821, BW=22.0MiB/s (23.1MB/s)(110MiB/5001msec) 00:30:46.033 slat (nsec): min=5882, max=29170, avg=8275.72, stdev=2680.05 00:30:46.033 clat (usec): min=1538, max=45742, avg=2814.27, stdev=1111.44 00:30:46.033 lat (usec): min=1544, max=45764, avg=2822.55, stdev=1111.44 00:30:46.033 clat percentiles (usec): 00:30:46.033 | 1.00th=[ 1893], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2409], 00:30:46.033 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 2900], 00:30:46.033 | 70.00th=[ 2966], 80.00th=[ 3130], 90.00th=[ 3359], 95.00th=[ 3523], 00:30:46.033 | 99.00th=[ 3982], 99.50th=[ 4146], 99.90th=[ 4817], 99.95th=[45876], 00:30:46.033 | 99.99th=[45876] 00:30:46.033 bw ( KiB/s): min=21040, max=23392, per=25.78%, avg=22513.78, stdev=742.51, samples=9 00:30:46.033 iops : min= 2630, max= 2924, avg=2814.22, stdev=92.81, samples=9 00:30:46.033 lat (msec) : 2=2.54%, 4=96.53%, 10=0.88%, 50=0.06% 00:30:46.033 cpu : usr=93.40%, sys=6.20%, ctx=6, majf=0, minf=9 00:30:46.033 IO depths : 1=0.1%, 2=1.1%, 4=65.9%, 8=32.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.033 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.033 issued rwts: total=14112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.033 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:46.033 filename1: (groupid=0, jobs=1): err= 0: pid=2323470: Wed May 15 12:32:14 2024 00:30:46.033 read: IOPS=2932, BW=22.9MiB/s (24.0MB/s)(115MiB/5003msec) 00:30:46.033 slat (nsec): min=5916, max=42836, avg=8234.78, stdev=2649.43 00:30:46.033 clat (usec): min=1139, max=5693, avg=2705.86, stdev=429.99 00:30:46.033 lat (usec): min=1145, max=5717, avg=2714.09, stdev=429.97 00:30:46.033 clat percentiles (usec): 00:30:46.033 | 1.00th=[ 
1729], 5.00th=[ 2040], 10.00th=[ 2180], 20.00th=[ 2343], 00:30:46.033 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2802], 00:30:46.033 | 70.00th=[ 2900], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3392], 00:30:46.033 | 99.00th=[ 3884], 99.50th=[ 4178], 99.90th=[ 4555], 99.95th=[ 4752], 00:30:46.033 | 99.99th=[ 5669] 00:30:46.033 bw ( KiB/s): min=22720, max=25010, per=26.87%, avg=23469.00, stdev=679.23, samples=10 00:30:46.033 iops : min= 2840, max= 3126, avg=2933.60, stdev=84.84, samples=10 00:30:46.033 lat (msec) : 2=4.09%, 4=95.16%, 10=0.75% 00:30:46.033 cpu : usr=93.86%, sys=5.78%, ctx=7, majf=0, minf=9 00:30:46.033 IO depths : 1=0.1%, 2=1.2%, 4=66.8%, 8=31.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.033 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.033 issued rwts: total=14673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.033 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:46.033 filename1: (groupid=0, jobs=1): err= 0: pid=2323471: Wed May 15 12:32:14 2024 00:30:46.033 read: IOPS=2264, BW=17.7MiB/s (18.5MB/s)(88.5MiB/5001msec) 00:30:46.033 slat (nsec): min=5869, max=42194, avg=8456.26, stdev=2802.87 00:30:46.033 clat (usec): min=1983, max=8538, avg=3510.89, stdev=638.83 00:30:46.033 lat (usec): min=1990, max=8559, avg=3519.34, stdev=638.78 00:30:46.033 clat percentiles (usec): 00:30:46.033 | 1.00th=[ 2311], 5.00th=[ 2606], 10.00th=[ 2769], 20.00th=[ 2999], 00:30:46.033 | 30.00th=[ 3163], 40.00th=[ 3294], 50.00th=[ 3425], 60.00th=[ 3589], 00:30:46.033 | 70.00th=[ 3752], 80.00th=[ 4015], 90.00th=[ 4359], 95.00th=[ 4686], 00:30:46.033 | 99.00th=[ 5276], 99.50th=[ 5538], 99.90th=[ 6849], 99.95th=[ 8455], 00:30:46.033 | 99.99th=[ 8586] 00:30:46.033 bw ( KiB/s): min=17360, max=19072, per=20.70%, avg=18083.56, stdev=501.06, samples=9 00:30:46.033 iops : min= 2170, max= 2384, avg=2260.44, stdev=62.63, samples=9 00:30:46.033 lat (msec) : 2=0.01%, 4=79.40%, 10=20.60% 00:30:46.033 cpu : usr=94.44%, sys=5.22%, ctx=7, majf=0, minf=9 00:30:46.033 IO depths : 1=0.2%, 2=2.0%, 4=65.9%, 8=32.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.033 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.033 issued rwts: total=11323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.033 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:46.033 00:30:46.033 Run status group 0 (all jobs): 00:30:46.033 READ: bw=85.3MiB/s (89.4MB/s), 17.7MiB/s-22.9MiB/s (18.5MB/s-24.0MB/s), io=427MiB (447MB), run=5001-5003msec 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.033 00:30:46.033 real 0m24.539s 00:30:46.033 user 4m55.201s 00:30:46.033 sys 0m8.372s 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:46.033 12:32:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.033 ************************************ 00:30:46.033 END TEST fio_dif_rand_params 00:30:46.033 ************************************ 00:30:46.033 12:32:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:46.033 12:32:14 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:46.033 12:32:14 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:46.033 12:32:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:46.033 ************************************ 00:30:46.033 START TEST fio_dif_digest 00:30:46.033 ************************************ 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # fio_dif_digest 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:46.033 bdev_null0 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:46.033 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.034 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:46.291 [2024-05-15 12:32:14.576178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:46.291 { 00:30:46.291 "params": { 00:30:46.291 "name": "Nvme$subsystem", 00:30:46.291 "trtype": "$TEST_TRANSPORT", 00:30:46.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.291 "adrfam": "ipv4", 00:30:46.291 "trsvcid": "$NVMF_PORT", 00:30:46.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.291 "hdgst": ${hdgst:-false}, 00:30:46.291 "ddgst": ${ddgst:-false} 00:30:46.291 }, 00:30:46.291 "method": 
"bdev_nvme_attach_controller" 00:30:46.291 } 00:30:46.291 EOF 00:30:46.291 )") 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local sanitizers 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:46.291 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # shift 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local asan_lib= 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libasan 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:46.292 "params": { 00:30:46.292 "name": "Nvme0", 00:30:46.292 "trtype": "tcp", 00:30:46.292 "traddr": "10.0.0.2", 00:30:46.292 "adrfam": "ipv4", 00:30:46.292 "trsvcid": "4420", 00:30:46.292 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:46.292 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:46.292 "hdgst": true, 00:30:46.292 "ddgst": true 00:30:46.292 }, 00:30:46.292 "method": "bdev_nvme_attach_controller" 00:30:46.292 }' 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:46.292 12:32:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.550 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:46.550 ... 
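The digest variant differs from the earlier runs only in its attach parameters: the generated config above connects to cnode0 with NVMe/TCP header and data digests enabled ("hdgst"/"ddgst" set to true). Written out as a stand-alone config file for the fio bdev plugin, that entry would look roughly as follows; the outer "subsystems"/"bdev" wrapper is the plugin's usual JSON-config shape and bdev.json is a hypothetical file name, neither of which appears verbatim in the trace:

  cat > bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": true,
              "ddgst": true
            }
          }
        ]
      }
    ]
  }
  EOF

The attached controller is exposed to fio as bdev Nvme0n1, and fio picks the config up through the --spdk_json_conf argument shown in the command line above (the harness passes it on /dev/fd/62 instead of a named file).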
00:30:46.550 fio-3.35 00:30:46.550 Starting 3 threads 00:30:46.550 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.778 00:30:58.778 filename0: (groupid=0, jobs=1): err= 0: pid=2324685: Wed May 15 12:32:25 2024 00:30:58.778 read: IOPS=319, BW=39.9MiB/s (41.8MB/s)(401MiB/10046msec) 00:30:58.778 slat (nsec): min=6218, max=44960, avg=10842.34, stdev=2150.85 00:30:58.778 clat (usec): min=4638, max=61469, avg=9371.42, stdev=2555.85 00:30:58.778 lat (usec): min=4644, max=61495, avg=9382.26, stdev=2556.38 00:30:58.778 clat percentiles (usec): 00:30:58.778 | 1.00th=[ 5080], 5.00th=[ 6128], 10.00th=[ 6915], 20.00th=[ 7635], 00:30:58.778 | 30.00th=[ 8225], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10159], 00:30:58.778 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:30:58.778 | 99.00th=[12518], 99.50th=[12911], 99.90th=[50070], 99.95th=[61080], 00:30:58.778 | 99.99th=[61604] 00:30:58.778 bw ( KiB/s): min=33024, max=47360, per=39.42%, avg=41024.00, stdev=3611.68, samples=20 00:30:58.778 iops : min= 258, max= 370, avg=320.50, stdev=28.22, samples=20 00:30:58.778 lat (msec) : 10=56.69%, 20=43.16%, 50=0.06%, 100=0.09% 00:30:58.778 cpu : usr=92.64%, sys=6.99%, ctx=17, majf=0, minf=155 00:30:58.778 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.778 issued rwts: total=3207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.778 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:58.778 filename0: (groupid=0, jobs=1): err= 0: pid=2324686: Wed May 15 12:32:25 2024 00:30:58.778 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(332MiB/10046msec) 00:30:58.778 slat (nsec): min=6212, max=27983, avg=11038.27, stdev=1924.43 00:30:58.778 clat (usec): min=5481, max=92898, avg=11305.01, stdev=8384.38 00:30:58.778 lat (usec): min=5493, max=92905, avg=11316.05, stdev=8384.42 00:30:58.778 clat percentiles (usec): 00:30:58.778 | 1.00th=[ 6521], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 8586], 00:30:58.778 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10290], 00:30:58.778 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11469], 95.00th=[12518], 00:30:58.778 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55313], 99.95th=[92799], 00:30:58.778 | 99.99th=[92799] 00:30:58.778 bw ( KiB/s): min=27392, max=41216, per=32.68%, avg=34009.60, stdev=3432.87, samples=20 00:30:58.778 iops : min= 214, max= 322, avg=265.70, stdev=26.82, samples=20 00:30:58.778 lat (msec) : 10=51.49%, 20=44.68%, 50=0.38%, 100=3.46% 00:30:58.778 cpu : usr=92.15%, sys=7.29%, ctx=15, majf=0, minf=122 00:30:58.778 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.778 issued rwts: total=2659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.778 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:58.778 filename0: (groupid=0, jobs=1): err= 0: pid=2324687: Wed May 15 12:32:25 2024 00:30:58.778 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(288MiB/10007msec) 00:30:58.778 slat (nsec): min=6271, max=28875, avg=11535.09, stdev=1862.68 00:30:58.778 clat (usec): min=6064, max=96958, avg=13031.52, stdev=10031.88 00:30:58.778 lat (usec): min=6076, max=96970, avg=13043.06, stdev=10031.91 00:30:58.778 clat percentiles (usec): 
00:30:58.778 | 1.00th=[ 6783], 5.00th=[ 7242], 10.00th=[ 7832], 20.00th=[ 9765], 00:30:58.778 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11338], 00:30:58.778 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12911], 95.00th=[51119], 00:30:58.778 | 99.00th=[53740], 99.50th=[54789], 99.90th=[56361], 99.95th=[92799], 00:30:58.778 | 99.99th=[96994] 00:30:58.778 bw ( KiB/s): min=20224, max=36096, per=28.28%, avg=29427.20, stdev=3826.48, samples=20 00:30:58.778 iops : min= 158, max= 282, avg=229.90, stdev=29.89, samples=20 00:30:58.778 lat (msec) : 10=23.16%, 20=71.06%, 50=0.43%, 100=5.35% 00:30:58.778 cpu : usr=92.53%, sys=6.76%, ctx=18, majf=0, minf=148 00:30:58.778 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.778 issued rwts: total=2301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.778 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:58.778 00:30:58.778 Run status group 0 (all jobs): 00:30:58.778 READ: bw=102MiB/s (107MB/s), 28.7MiB/s-39.9MiB/s (30.1MB/s-41.8MB/s), io=1021MiB (1070MB), run=10007-10046msec 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.778 00:30:58.778 real 0m11.310s 00:30:58.778 user 0m36.760s 00:30:58.778 sys 0m2.525s 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:58.778 12:32:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:58.778 ************************************ 00:30:58.778 END TEST fio_dif_digest 00:30:58.778 ************************************ 00:30:58.778 12:32:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:58.778 12:32:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:58.778 12:32:25 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:58.778 12:32:25 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:58.778 12:32:25 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:58.778 12:32:25 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:58.778 12:32:25 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:58.778 12:32:25 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:58.778 rmmod nvme_tcp 00:30:58.778 rmmod 
nvme_fabrics 00:30:58.778 rmmod nvme_keyring 00:30:58.778 12:32:25 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:58.778 12:32:25 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:58.778 12:32:25 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:58.778 12:32:25 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2315615 ']' 00:30:58.778 12:32:25 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2315615 00:30:58.778 12:32:25 nvmf_dif -- common/autotest_common.sh@947 -- # '[' -z 2315615 ']' 00:30:58.778 12:32:25 nvmf_dif -- common/autotest_common.sh@951 -- # kill -0 2315615 00:30:58.778 12:32:25 nvmf_dif -- common/autotest_common.sh@952 -- # uname 00:30:58.778 12:32:25 nvmf_dif -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:58.778 12:32:25 nvmf_dif -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2315615 00:30:58.778 12:32:26 nvmf_dif -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:58.778 12:32:26 nvmf_dif -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:58.778 12:32:26 nvmf_dif -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2315615' 00:30:58.778 killing process with pid 2315615 00:30:58.778 12:32:26 nvmf_dif -- common/autotest_common.sh@966 -- # kill 2315615 00:30:58.778 [2024-05-15 12:32:26.005236] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:58.778 12:32:26 nvmf_dif -- common/autotest_common.sh@971 -- # wait 2315615 00:30:58.778 12:32:26 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:58.778 12:32:26 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:00.674 Waiting for block devices as requested 00:31:00.674 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:00.674 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:00.674 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:00.932 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:00.932 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:00.932 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:01.189 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:01.189 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:01.189 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:01.189 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:01.447 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:01.447 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:01.447 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:01.704 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:01.704 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:01.704 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:01.962 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:01.962 12:32:30 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:01.962 12:32:30 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:01.962 12:32:30 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:01.962 12:32:30 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:01.962 12:32:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.962 12:32:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:01.962 12:32:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.487 12:32:32 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
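Condensed, the cleanup that nvmftestfini performs above is roughly the following by-hand sequence (a sketch of what this run logs rather than the helper functions themselves; the PID, interface and namespace names are specific to this run, and the netns deletion is an assumption about what remove_spdk_ns amounts to here):

  sync
  modprobe -v -r nvme-tcp           # also drops nvme_fabrics and nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  kill 2315615                      # the nvmf_tgt process (reactor_0) started for this suite
  ./scripts/setup.sh reset          # rebind the DMA engines and the NVMe disk from vfio-pci back to kernel drivers
  ip netns delete cvl_0_0_ns_spdk   # assumption: the namespace remove_spdk_ns tears down
  ip -4 addr flush cvl_0_1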
00:31:04.487 00:31:04.487 real 1m16.021s 00:31:04.487 user 7m16.096s 00:31:04.487 sys 0m28.666s 00:31:04.487 12:32:32 nvmf_dif -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:04.487 12:32:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:04.487 ************************************ 00:31:04.487 END TEST nvmf_dif 00:31:04.487 ************************************ 00:31:04.487 12:32:32 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:04.487 12:32:32 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:31:04.487 12:32:32 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:04.487 12:32:32 -- common/autotest_common.sh@10 -- # set +x 00:31:04.487 ************************************ 00:31:04.487 START TEST nvmf_abort_qd_sizes 00:31:04.487 ************************************ 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:04.487 * Looking for test storage... 00:31:04.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.487 12:32:32 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:04.487 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:04.488 12:32:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:11.038 12:32:39 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:11.038 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:11.038 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:11.038 Found net devices under 0000:af:00.0: cvl_0_0 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:11.038 Found net devices under 0000:af:00.1: cvl_0_1 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.038 12:32:39 nvmf_abort_qd_sizes 
-- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:11.038 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:11.039 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.039 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:11.039 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:11.039 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:11.039 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:11.039 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:11.039 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:11.039 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:11.039 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:11.295 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:11.295 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:11.295 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:11.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:11.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:31:11.295 00:31:11.295 --- 10.0.0.2 ping statistics --- 00:31:11.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.295 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:31:11.295 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:11.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:11.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:31:11.295 00:31:11.295 --- 10.0.0.1 ping statistics --- 00:31:11.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.295 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:31:11.295 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.295 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:11.295 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:11.295 12:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:14.568 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:14.568 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:14.568 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:14.568 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:14.568 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:14.568 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:14.568 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:14.568 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:14.568 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:14.568 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:14.826 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:14.826 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:14.826 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:14.826 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:14.826 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:14.826 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:16.727 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2332990 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2332990 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # '[' -z 2332990 ']' 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:16.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:16.727 12:32:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:16.727 [2024-05-15 12:32:44.974014] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:31:16.727 [2024-05-15 12:32:44.974059] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.727 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.727 [2024-05-15 12:32:45.047425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.727 [2024-05-15 12:32:45.121682] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.727 [2024-05-15 12:32:45.121722] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.727 [2024-05-15 12:32:45.121737] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.727 [2024-05-15 12:32:45.121748] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.727 [2024-05-15 12:32:45.121759] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.727 [2024-05-15 12:32:45.121822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.727 [2024-05-15 12:32:45.121899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.727 [2024-05-15 12:32:45.121984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.727 [2024-05-15 12:32:45.121987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.291 12:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:17.291 12:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@861 -- # return 0 00:31:17.291 12:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:17.291 12:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:17.291 12:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:17.291 12:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.291 12:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:17.291 12:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:17.291 12:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:17.549 12:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:17.549 ************************************ 00:31:17.549 START TEST spdk_target_abort 00:31:17.549 ************************************ 00:31:17.549 12:32:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # spdk_target 00:31:17.549 12:32:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:17.549 12:32:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:31:17.549 12:32:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.549 12:32:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:20.896 spdk_targetn1 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:20.896 [2024-05-15 12:32:48.727620] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:20.896 [2024-05-15 12:32:48.763654] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:20.896 [2024-05-15 12:32:48.763934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:20.896 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:20.897 12:32:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:20.897 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.426 Initializing NVMe Controllers 00:31:23.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:23.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:23.426 Initialization complete. Launching workers. 00:31:23.426 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6048, failed: 0 00:31:23.426 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1372, failed to submit 4676 00:31:23.426 success 953, unsuccess 419, failed 0 00:31:23.426 12:32:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:23.427 12:32:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:23.685 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.968 Initializing NVMe Controllers 00:31:26.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:26.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:26.968 Initialization complete. Launching workers. 00:31:26.968 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8831, failed: 0 00:31:26.968 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 7612 00:31:26.968 success 336, unsuccess 883, failed 0 00:31:26.968 12:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:26.968 12:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:26.968 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.255 Initializing NVMe Controllers 00:31:30.255 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:30.255 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:30.255 Initialization complete. Launching workers. 
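The spdk_target_abort case is driven entirely through rpc_cmd against that running target: the local NVMe device is attached as a bdev, exported over a TCP transport, and the abort example then runs against it at queue depths 4, 24 and 64 while counting how many abort commands succeed. A sketch of the equivalent manual sequence, assuming the default /var/tmp/spdk.sock RPC socket and the repo root as working directory (the RPC names and arguments are the ones visible in the trace):

    ./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    # one abort run per queue depth; the qd=64 pass whose results follow below:
    ./build/examples/abort -q 64 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'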
00:31:30.255 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35256, failed: 0 00:31:30.255 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2869, failed to submit 32387 00:31:30.255 success 688, unsuccess 2181, failed 0 00:31:30.255 12:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:30.255 12:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:30.255 12:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:30.255 12:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:30.255 12:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:30.255 12:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:30.255 12:32:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2332990 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' -z 2332990 ']' 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # kill -0 2332990 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # uname 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2332990 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2332990' 00:31:32.158 killing process with pid 2332990 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # kill 2332990 00:31:32.158 [2024-05-15 12:33:00.304880] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # wait 2332990 00:31:32.158 00:31:32.158 real 0m14.635s 00:31:32.158 user 0m57.753s 00:31:32.158 sys 0m2.823s 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:32.158 ************************************ 00:31:32.158 END TEST spdk_target_abort 00:31:32.158 ************************************ 00:31:32.158 12:33:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:32.158 12:33:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:31:32.158 12:33:00 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:31:32.158 12:33:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:32.158 ************************************ 00:31:32.158 START TEST kernel_target_abort 00:31:32.158 ************************************ 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # kernel_target 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:32.158 12:33:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:35.445 Waiting for block devices as requested 00:31:35.445 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:35.445 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:35.705 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:35.705 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:35.705 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:35.963 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:35.963 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:35.963 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:35.963 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:36.223 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:36.223 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:36.223 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:36.481 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:36.481 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:36.481 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:36.739 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:36.739 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:36.998 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:36.998 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:36.998 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:36.999 No valid GPT data, bailing 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:36.999 12:33:05 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:31:36.999 00:31:36.999 Discovery Log Number of Records 2, Generation counter 2 00:31:36.999 =====Discovery Log Entry 0====== 00:31:36.999 trtype: tcp 00:31:36.999 adrfam: ipv4 00:31:36.999 subtype: current discovery subsystem 00:31:36.999 treq: not specified, sq flow control disable supported 00:31:36.999 portid: 1 00:31:36.999 trsvcid: 4420 00:31:36.999 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:36.999 traddr: 10.0.0.1 00:31:36.999 eflags: none 00:31:36.999 sectype: none 00:31:36.999 =====Discovery Log Entry 1====== 00:31:36.999 trtype: tcp 00:31:36.999 adrfam: ipv4 00:31:36.999 subtype: nvme subsystem 00:31:36.999 treq: not specified, sq flow control disable supported 00:31:36.999 portid: 1 00:31:36.999 trsvcid: 4420 00:31:36.999 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:36.999 traddr: 10.0.0.1 00:31:36.999 eflags: none 00:31:36.999 sectype: none 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.999 12:33:05 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:36.999 12:33:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:37.257 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.594 Initializing NVMe Controllers 00:31:40.594 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:40.594 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:40.594 Initialization complete. Launching workers. 00:31:40.594 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56018, failed: 0 00:31:40.594 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56018, failed to submit 0 00:31:40.594 success 0, unsuccess 56018, failed 0 00:31:40.594 12:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:40.594 12:33:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:40.594 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.874 Initializing NVMe Controllers 00:31:43.874 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:43.874 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:43.874 Initialization complete. Launching workers. 
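For kernel_target_abort the roles flip: configure_kernel_target exports the raw /dev/nvme0n1 through the in-kernel nvmet target on 10.0.0.1:4420 and the SPDK abort example acts as the initiator. xtrace does not print redirection targets, so the configfs attribute names in the sketch below (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet ones and are an assumption; the mkdir/echo/ln sequence itself is the one traced above:

    modprobe nvmet          # nvmet_tcp ends up loaded as well, as the later modprobe -r shows
    cd /sys/kernel/config/nvmet
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    mkdir ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
    echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    echo 10.0.0.1 > ports/1/addr_traddr
    echo tcp      > ports/1/addr_trtype
    echo 4420     > ports/1/addr_trsvcid
    echo ipv4     > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
    # sanity check, as in the trace: the discovery log should now list the testnqn subsystem
    nvme discover -t tcp -a 10.0.0.1 -s 4420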
00:31:43.874 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103775, failed: 0 00:31:43.874 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26074, failed to submit 77701 00:31:43.874 success 0, unsuccess 26074, failed 0 00:31:43.874 12:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:43.874 12:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.874 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.406 Initializing NVMe Controllers 00:31:46.406 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:46.406 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:46.406 Initialization complete. Launching workers. 00:31:46.406 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102106, failed: 0 00:31:46.406 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25526, failed to submit 76580 00:31:46.406 success 0, unsuccess 25526, failed 0 00:31:46.406 12:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:46.406 12:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:46.406 12:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:46.406 12:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:46.406 12:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:46.406 12:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:46.406 12:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:46.406 12:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:46.406 12:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:46.406 12:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:49.691 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:49.691 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:49.691 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:49.691 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:49.691 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:49.691 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:49.691 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:49.691 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:49.691 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:49.691 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:49.691 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:49.692 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:49.692 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:49.692 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:31:49.692 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:49.692 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:51.597 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:51.597 00:31:51.597 real 0m19.122s 00:31:51.597 user 0m6.763s 00:31:51.597 sys 0m6.311s 00:31:51.597 12:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:51.597 12:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:51.597 ************************************ 00:31:51.597 END TEST kernel_target_abort 00:31:51.597 ************************************ 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:51.597 rmmod nvme_tcp 00:31:51.597 rmmod nvme_fabrics 00:31:51.597 rmmod nvme_keyring 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2332990 ']' 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2332990 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@947 -- # '[' -z 2332990 ']' 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@951 -- # kill -0 2332990 00:31:51.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2332990) - No such process 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@974 -- # echo 'Process with pid 2332990 is not found' 00:31:51.597 Process with pid 2332990 is not found 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:51.597 12:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:54.881 Waiting for block devices as requested 00:31:54.881 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:54.881 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:54.881 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:54.881 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:54.881 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:55.139 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:55.139 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:55.139 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:55.139 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:55.397 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:55.397 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:55.397 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:55.655 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:55.655 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:55.655 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:55.913 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:55.913 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:56.170 12:33:24 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:56.170 12:33:24 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:56.170 12:33:24 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:56.170 12:33:24 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:56.170 12:33:24 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.170 12:33:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:56.170 12:33:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.069 12:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:58.069 00:31:58.069 real 0m53.946s 00:31:58.069 user 1m9.348s 00:31:58.069 sys 0m19.611s 00:31:58.069 12:33:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:58.069 12:33:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:58.069 ************************************ 00:31:58.069 END TEST nvmf_abort_qd_sizes 00:31:58.069 ************************************ 00:31:58.328 12:33:26 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:58.328 12:33:26 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:31:58.328 12:33:26 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:58.328 12:33:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.328 ************************************ 00:31:58.328 START TEST keyring_file 00:31:58.328 ************************************ 00:31:58.328 12:33:26 keyring_file -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:58.328 * Looking for test storage... 
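nvmftestfini then tears the abort_qd_sizes environment back down: the nvme-tcp stack is unloaded (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring above), the already-gone nvmf_tgt pid is reaped, which is why killprocess reports 'No such process', setup.sh reset hands the PCI devices back to their kernel drivers, and the SPDK namespace plus the cvl_0_1 address are cleaned up. A condensed sketch of those steps; the explicit ip netns delete is an assumption about what _remove_spdk_ns amounts to for cvl_0_0_ns_spdk:

    sync
    modprobe -v -r nvme-tcp            # drops nvme_fabrics and nvme_keyring as dependents
    modprobe -v -r nvme-fabrics
    ./scripts/setup.sh reset           # rebind devices to ioatdma / nvme
    ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

That closes out nvmf_abort_qd_sizes before the keyring_file suite begins.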
00:31:58.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:58.328 12:33:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:58.328 12:33:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.328 12:33:26 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.328 12:33:26 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.328 12:33:26 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.328 12:33:26 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.328 12:33:26 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.328 12:33:26 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.328 12:33:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:58.328 12:33:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:58.328 12:33:26 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:58.328 12:33:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:58.328 12:33:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:58.328 12:33:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:58.328 12:33:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:58.328 12:33:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:58.329 12:33:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:58.329 12:33:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:58.329 12:33:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:58.329 12:33:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:58.329 12:33:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:58.329 12:33:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:58.329 12:33:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:58.329 12:33:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.huBIWb3fRp 00:31:58.329 12:33:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:58.329 12:33:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:58.329 12:33:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:58.329 12:33:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:58.329 12:33:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:58.329 12:33:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:58.329 12:33:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:58.329 12:33:26 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.huBIWb3fRp 00:31:58.588 12:33:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.huBIWb3fRp 00:31:58.588 12:33:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.huBIWb3fRp 00:31:58.588 12:33:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:58.588 12:33:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:58.588 12:33:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:58.588 12:33:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:58.588 12:33:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:58.588 12:33:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:58.588 12:33:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.agR2xekUeZ 00:31:58.588 12:33:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:58.588 12:33:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:58.588 12:33:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:58.588 12:33:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:58.588 12:33:26 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:58.588 12:33:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:58.588 12:33:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:58.588 12:33:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.agR2xekUeZ 00:31:58.588 12:33:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.agR2xekUeZ 00:31:58.588 12:33:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.agR2xekUeZ 00:31:58.588 12:33:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=2343008 00:31:58.588 12:33:26 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:58.588 12:33:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2343008 00:31:58.588 12:33:26 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2343008 ']' 00:31:58.588 12:33:26 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.588 12:33:26 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:58.588 12:33:26 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.588 12:33:26 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:58.588 12:33:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:58.588 [2024-05-15 12:33:26.972460] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
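prep_key is what turns the raw hex strings into files the keyring code can load: each key goes to a mktemp path (here /tmp/tmp.huBIWb3fRp for key0 and /tmp/tmp.agR2xekUeZ for key1), gets wrapped into the NVMe/TCP TLS PSK interchange format by the inline python call, and is restricted to mode 0600 (the chmod 0660 / NOT keyring_file_add_key check at the end of this section exercises the rejection path for looser permissions). A sketch for key0, assuming test/nvmf/common.sh and test/keyring/common.sh are sourced; the interchange layout in the comment is a summary of the documented format rather than output shown verbatim in the trace:

    key0_path=$(mktemp)                                        # /tmp/tmp.huBIWb3fRp in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0_path"
    # the file now holds a string of the form
    #   NVMeTLSkey-1:00:<base64 of the PSK bytes followed by their CRC32>:
    # where the second field reflects the digest argument (0 selects no HMAC)
    chmod 0600 "$key0_path"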
00:31:58.588 [2024-05-15 12:33:26.972510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343008 ] 00:31:58.588 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.588 [2024-05-15 12:33:27.040604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.588 [2024-05-15 12:33:27.114273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:31:59.553 12:33:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:59.553 [2024-05-15 12:33:27.761184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.553 null0 00:31:59.553 [2024-05-15 12:33:27.793214] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:59.553 [2024-05-15 12:33:27.793263] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:59.553 [2024-05-15 12:33:27.793605] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:59.553 [2024-05-15 12:33:27.801267] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.553 12:33:27 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:59.553 [2024-05-15 12:33:27.817306] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:59.553 request: 00:31:59.553 { 00:31:59.553 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:59.553 "secure_channel": false, 00:31:59.553 "listen_address": { 00:31:59.553 "trtype": "tcp", 00:31:59.553 "traddr": "127.0.0.1", 00:31:59.553 "trsvcid": "4420" 00:31:59.553 }, 00:31:59.553 "method": "nvmf_subsystem_add_listener", 00:31:59.553 "req_id": 1 00:31:59.553 } 00:31:59.553 Got JSON-RPC error response 00:31:59.553 response: 00:31:59.553 { 00:31:59.553 "code": -32602, 00:31:59.553 
"message": "Invalid parameters" 00:31:59.553 } 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:59.553 12:33:27 keyring_file -- keyring/file.sh@46 -- # bperfpid=2343063 00:31:59.553 12:33:27 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2343063 /var/tmp/bperf.sock 00:31:59.553 12:33:27 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2343063 ']' 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:59.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:59.553 12:33:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:59.553 [2024-05-15 12:33:27.870388] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 00:31:59.553 [2024-05-15 12:33:27.870432] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343063 ] 00:31:59.553 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.553 [2024-05-15 12:33:27.940084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.553 [2024-05-15 12:33:28.019400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.487 12:33:28 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:00.487 12:33:28 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:32:00.487 12:33:28 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.huBIWb3fRp 00:32:00.487 12:33:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.huBIWb3fRp 00:32:00.487 12:33:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.agR2xekUeZ 00:32:00.487 12:33:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.agR2xekUeZ 00:32:00.745 12:33:29 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:00.745 12:33:29 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:00.745 12:33:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.745 12:33:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.745 12:33:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:32:00.745 12:33:29 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.huBIWb3fRp == \/\t\m\p\/\t\m\p\.\h\u\B\I\W\b\3\f\R\p ]] 00:32:00.745 12:33:29 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:32:00.745 12:33:29 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:00.745 12:33:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.745 12:33:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.745 12:33:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:01.004 12:33:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.agR2xekUeZ == \/\t\m\p\/\t\m\p\.\a\g\R\2\x\e\k\U\e\Z ]] 00:32:01.004 12:33:29 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:01.004 12:33:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:01.004 12:33:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:01.004 12:33:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:01.004 12:33:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.004 12:33:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:01.263 12:33:29 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:01.263 12:33:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:01.263 12:33:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:01.263 12:33:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:01.263 12:33:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:01.263 12:33:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:01.263 12:33:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.263 12:33:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:01.263 12:33:29 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:01.263 12:33:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:01.523 [2024-05-15 12:33:29.897889] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:01.523 nvme0n1 00:32:01.523 12:33:29 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:01.523 12:33:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:01.523 12:33:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:01.523 12:33:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:01.523 12:33:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.523 12:33:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:01.782 12:33:30 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:01.782 12:33:30 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:01.782 12:33:30 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:01.782 12:33:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:01.782 12:33:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:01.782 12:33:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.782 12:33:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:02.041 12:33:30 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:02.041 12:33:30 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:02.041 Running I/O for 1 seconds... 00:32:02.978 00:32:02.978 Latency(us) 00:32:02.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.978 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:02.978 nvme0n1 : 1.01 9295.48 36.31 0.00 0.00 13685.33 4613.73 19503.51 00:32:02.978 =================================================================================================================== 00:32:02.978 Total : 9295.48 36.31 0.00 0.00 13685.33 4613.73 19503.51 00:32:02.978 0 00:32:02.978 12:33:31 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:02.978 12:33:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:03.237 12:33:31 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:03.237 12:33:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:03.237 12:33:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:03.237 12:33:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.237 12:33:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.237 12:33:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:03.496 12:33:31 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:03.496 12:33:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:03.496 12:33:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:03.496 12:33:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:03.496 12:33:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.496 12:33:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.496 12:33:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:03.496 12:33:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:03.496 12:33:31 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:03.496 12:33:31 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:03.496 12:33:31 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:03.496 12:33:31 keyring_file -- common/autotest_common.sh@637 -- # 
local arg=bperf_cmd 00:32:03.496 12:33:31 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:03.496 12:33:31 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:03.496 12:33:31 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:03.496 12:33:31 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:03.496 12:33:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:03.755 [2024-05-15 12:33:32.144262] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:03.755 [2024-05-15 12:33:32.144964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24270e0 (107): Transport endpoint is not connected 00:32:03.755 [2024-05-15 12:33:32.145954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24270e0 (9): Bad file descriptor 00:32:03.755 [2024-05-15 12:33:32.146954] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:03.755 [2024-05-15 12:33:32.146967] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:03.755 [2024-05-15 12:33:32.146979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
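The attach attempt above is a deliberate negative test: it passes --psk key1, which is not the PSK the target was set up with earlier in the test (not shown in this excerpt), so the TLS handshake fails, the socket reports errno 107 (Transport endpoint is not connected), and the controller ends in a failed state; the JSON-RPC request and its -32602 error response are dumped next. The NOT wrapper from autotest_common.sh, whose expansion is interleaved here, simply inverts the exit status so an expected failure keeps the test green. A simplified reading of that pattern (not the literal helper, which also distinguishes signals and missing commands):

  # Simplified sketch of the expected-failure ("NOT") pattern used in this test.
  NOT() {
      local es=0
      "$@" || es=$?      # run the wrapped command, remember its exit status
      (( es != 0 ))      # succeed only if the wrapped command failed
  }

  NOT false && echo "negative test passed"   # false fails, so NOT returns 0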
00:32:03.755 request: 00:32:03.755 { 00:32:03.755 "name": "nvme0", 00:32:03.755 "trtype": "tcp", 00:32:03.755 "traddr": "127.0.0.1", 00:32:03.755 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:03.755 "adrfam": "ipv4", 00:32:03.755 "trsvcid": "4420", 00:32:03.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:03.755 "psk": "key1", 00:32:03.755 "method": "bdev_nvme_attach_controller", 00:32:03.755 "req_id": 1 00:32:03.755 } 00:32:03.755 Got JSON-RPC error response 00:32:03.755 response: 00:32:03.755 { 00:32:03.755 "code": -32602, 00:32:03.755 "message": "Invalid parameters" 00:32:03.755 } 00:32:03.755 12:33:32 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:03.755 12:33:32 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:03.755 12:33:32 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:03.755 12:33:32 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:03.755 12:33:32 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:03.755 12:33:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:03.755 12:33:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:03.755 12:33:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.755 12:33:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:03.755 12:33:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:04.014 12:33:32 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:04.014 12:33:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:04.014 12:33:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:04.014 12:33:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:04.014 12:33:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:04.014 12:33:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:04.014 12:33:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:04.014 12:33:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:04.014 12:33:32 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:04.014 12:33:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:04.273 12:33:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:04.273 12:33:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:04.531 12:33:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:04.531 12:33:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:04.531 12:33:32 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:04.531 12:33:33 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:04.531 12:33:33 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.huBIWb3fRp 00:32:04.531 12:33:33 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.huBIWb3fRp 00:32:04.531 12:33:33 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:04.531 12:33:33 
keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.huBIWb3fRp 00:32:04.531 12:33:33 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:04.531 12:33:33 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:04.531 12:33:33 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:04.531 12:33:33 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:04.531 12:33:33 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.huBIWb3fRp 00:32:04.531 12:33:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.huBIWb3fRp 00:32:04.799 [2024-05-15 12:33:33.179990] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.huBIWb3fRp': 0100660 00:32:04.799 [2024-05-15 12:33:33.180017] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:04.799 request: 00:32:04.799 { 00:32:04.799 "name": "key0", 00:32:04.799 "path": "/tmp/tmp.huBIWb3fRp", 00:32:04.799 "method": "keyring_file_add_key", 00:32:04.799 "req_id": 1 00:32:04.799 } 00:32:04.799 Got JSON-RPC error response 00:32:04.799 response: 00:32:04.799 { 00:32:04.799 "code": -1, 00:32:04.799 "message": "Operation not permitted" 00:32:04.799 } 00:32:04.799 12:33:33 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:04.799 12:33:33 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:04.799 12:33:33 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:04.799 12:33:33 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:04.799 12:33:33 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.huBIWb3fRp 00:32:04.799 12:33:33 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.huBIWb3fRp 00:32:04.799 12:33:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.huBIWb3fRp 00:32:05.058 12:33:33 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.huBIWb3fRp 00:32:05.058 12:33:33 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:05.058 12:33:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:05.058 12:33:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:05.058 12:33:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.058 12:33:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.058 12:33:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.058 12:33:33 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:05.058 12:33:33 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:05.058 12:33:33 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:05.058 12:33:33 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:05.058 12:33:33 
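Two failure modes of file-backed keys are exercised around here. First, a key file left group-accessible (chmod 0660) is rejected by keyring_file_add_key with "Invalid permissions for key file ... 0100660" and JSON-RPC code -1, and is accepted again once it is chmod 0600. Second, as the trace continues below, the file is rm -f'ed and the next bdev_nvme_attach_controller using that key fails (keyring reports "No such file or directory" and the RPC returns -19). A minimal reproduction of the permission rule, with the key-file path as an illustrative parameter (the test uses a mktemp'd file that already holds a PSK):

  # Sketch: keyring_file_add_key only accepts key files without group/other access.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  keyfile=${1:?path to an existing key file}

  chmod 0660 "$keyfile"
  "$rpc" -s "$sock" keyring_file_add_key key0 "$keyfile" \
      || echo "rejected, as in the trace above"

  chmod 0600 "$keyfile"
  "$rpc" -s "$sock" keyring_file_add_key key0 "$keyfile"   # accepted now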
keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:05.058 12:33:33 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:05.058 12:33:33 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:05.058 12:33:33 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:05.058 12:33:33 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:05.058 12:33:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:05.318 [2024-05-15 12:33:33.713407] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.huBIWb3fRp': No such file or directory 00:32:05.318 [2024-05-15 12:33:33.713433] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:05.318 [2024-05-15 12:33:33.713457] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:05.318 [2024-05-15 12:33:33.713469] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:05.318 [2024-05-15 12:33:33.713480] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:05.318 request: 00:32:05.318 { 00:32:05.318 "name": "nvme0", 00:32:05.318 "trtype": "tcp", 00:32:05.318 "traddr": "127.0.0.1", 00:32:05.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:05.318 "adrfam": "ipv4", 00:32:05.318 "trsvcid": "4420", 00:32:05.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:05.318 "psk": "key0", 00:32:05.318 "method": "bdev_nvme_attach_controller", 00:32:05.318 "req_id": 1 00:32:05.318 } 00:32:05.318 Got JSON-RPC error response 00:32:05.318 response: 00:32:05.318 { 00:32:05.318 "code": -19, 00:32:05.318 "message": "No such device" 00:32:05.318 } 00:32:05.318 12:33:33 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:05.318 12:33:33 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:05.318 12:33:33 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:05.318 12:33:33 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:05.318 12:33:33 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:05.318 12:33:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:05.576 12:33:33 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:05.576 12:33:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:05.576 12:33:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:05.576 12:33:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:05.576 12:33:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:05.576 12:33:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:05.576 12:33:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dQR90BOlcm 00:32:05.576 12:33:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:05.576 12:33:33 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:05.576 12:33:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:05.576 12:33:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:05.576 12:33:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:05.576 12:33:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:05.576 12:33:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:05.576 12:33:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dQR90BOlcm 00:32:05.576 12:33:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dQR90BOlcm 00:32:05.577 12:33:33 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.dQR90BOlcm 00:32:05.577 12:33:33 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dQR90BOlcm 00:32:05.577 12:33:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dQR90BOlcm 00:32:05.835 12:33:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:05.835 12:33:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:06.094 nvme0n1 00:32:06.094 12:33:34 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:06.094 12:33:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:06.094 12:33:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:06.094 12:33:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:06.094 12:33:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:06.094 12:33:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:06.094 12:33:34 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:06.094 12:33:34 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:06.094 12:33:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:06.352 12:33:34 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:06.352 12:33:34 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:06.352 12:33:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:06.352 12:33:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:06.352 12:33:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:06.611 12:33:34 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:06.611 12:33:34 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:06.611 12:33:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:06.611 12:33:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:06.611 12:33:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:06.611 12:33:34 keyring_file -- 
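The prep_key expansion above shows how the test builds a usable TLS key from a raw hex string: mktemp a path, run format_interchange_psk (the nvmf/common.sh@715 frames show it wrapping the hex key with the NVMeTLSkey-1 prefix via a small inline Python step), land the result in the temp file, chmod it 0600 to satisfy the permission check, and register it with keyring_file_add_key. The refcounts that follow show the key's lifecycle: attaching nvme0 raises refcnt to 2, and removing the key while the controller is still connected leaves it present with removed=true until the controller is detached. A sketch of the preparation flow, assuming the repo's test helpers are sourced and $rootdir, $rpc and $sock are set as in the earlier sketches:

  # Sketch of the prep_key flow traced above (helper names taken from the trace).
  source "$rootdir/test/nvmf/common.sh"            # provides format_interchange_psk
  path=$(mktemp)
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"   # hex key, digest 0
  chmod 0600 "$path"
  "$rpc" -s "$sock" keyring_file_add_key key0 "$path"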
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:06.611 12:33:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:06.611 12:33:35 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:06.611 12:33:35 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:06.611 12:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:06.869 12:33:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:06.869 12:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:06.869 12:33:35 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:07.128 12:33:35 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:07.128 12:33:35 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dQR90BOlcm 00:32:07.128 12:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dQR90BOlcm 00:32:07.128 12:33:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.agR2xekUeZ 00:32:07.128 12:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.agR2xekUeZ 00:32:07.387 12:33:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:07.387 12:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:07.645 nvme0n1 00:32:07.645 12:33:36 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:07.645 12:33:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:07.904 12:33:36 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:07.904 "subsystems": [ 00:32:07.904 { 00:32:07.904 "subsystem": "keyring", 00:32:07.904 "config": [ 00:32:07.904 { 00:32:07.904 "method": "keyring_file_add_key", 00:32:07.904 "params": { 00:32:07.904 "name": "key0", 00:32:07.904 "path": "/tmp/tmp.dQR90BOlcm" 00:32:07.904 } 00:32:07.904 }, 00:32:07.904 { 00:32:07.904 "method": "keyring_file_add_key", 00:32:07.904 "params": { 00:32:07.904 "name": "key1", 00:32:07.904 "path": "/tmp/tmp.agR2xekUeZ" 00:32:07.904 } 00:32:07.904 } 00:32:07.904 ] 00:32:07.904 }, 00:32:07.904 { 00:32:07.904 "subsystem": "iobuf", 00:32:07.904 "config": [ 00:32:07.904 { 00:32:07.904 "method": "iobuf_set_options", 00:32:07.904 "params": { 00:32:07.904 "small_pool_count": 8192, 00:32:07.904 "large_pool_count": 1024, 00:32:07.904 "small_bufsize": 8192, 00:32:07.904 "large_bufsize": 135168 00:32:07.904 } 00:32:07.904 } 00:32:07.904 ] 00:32:07.904 }, 00:32:07.904 { 00:32:07.904 "subsystem": "sock", 00:32:07.904 "config": [ 00:32:07.904 { 00:32:07.904 "method": "sock_impl_set_options", 00:32:07.904 "params": { 00:32:07.904 
"impl_name": "posix", 00:32:07.904 "recv_buf_size": 2097152, 00:32:07.904 "send_buf_size": 2097152, 00:32:07.904 "enable_recv_pipe": true, 00:32:07.904 "enable_quickack": false, 00:32:07.904 "enable_placement_id": 0, 00:32:07.904 "enable_zerocopy_send_server": true, 00:32:07.904 "enable_zerocopy_send_client": false, 00:32:07.904 "zerocopy_threshold": 0, 00:32:07.904 "tls_version": 0, 00:32:07.904 "enable_ktls": false 00:32:07.904 } 00:32:07.904 }, 00:32:07.904 { 00:32:07.904 "method": "sock_impl_set_options", 00:32:07.904 "params": { 00:32:07.904 "impl_name": "ssl", 00:32:07.904 "recv_buf_size": 4096, 00:32:07.904 "send_buf_size": 4096, 00:32:07.904 "enable_recv_pipe": true, 00:32:07.904 "enable_quickack": false, 00:32:07.904 "enable_placement_id": 0, 00:32:07.904 "enable_zerocopy_send_server": true, 00:32:07.904 "enable_zerocopy_send_client": false, 00:32:07.904 "zerocopy_threshold": 0, 00:32:07.904 "tls_version": 0, 00:32:07.904 "enable_ktls": false 00:32:07.904 } 00:32:07.904 } 00:32:07.904 ] 00:32:07.904 }, 00:32:07.904 { 00:32:07.904 "subsystem": "vmd", 00:32:07.904 "config": [] 00:32:07.904 }, 00:32:07.904 { 00:32:07.904 "subsystem": "accel", 00:32:07.904 "config": [ 00:32:07.904 { 00:32:07.904 "method": "accel_set_options", 00:32:07.904 "params": { 00:32:07.904 "small_cache_size": 128, 00:32:07.904 "large_cache_size": 16, 00:32:07.904 "task_count": 2048, 00:32:07.904 "sequence_count": 2048, 00:32:07.904 "buf_count": 2048 00:32:07.904 } 00:32:07.904 } 00:32:07.904 ] 00:32:07.905 }, 00:32:07.905 { 00:32:07.905 "subsystem": "bdev", 00:32:07.905 "config": [ 00:32:07.905 { 00:32:07.905 "method": "bdev_set_options", 00:32:07.905 "params": { 00:32:07.905 "bdev_io_pool_size": 65535, 00:32:07.905 "bdev_io_cache_size": 256, 00:32:07.905 "bdev_auto_examine": true, 00:32:07.905 "iobuf_small_cache_size": 128, 00:32:07.905 "iobuf_large_cache_size": 16 00:32:07.905 } 00:32:07.905 }, 00:32:07.905 { 00:32:07.905 "method": "bdev_raid_set_options", 00:32:07.905 "params": { 00:32:07.905 "process_window_size_kb": 1024 00:32:07.905 } 00:32:07.905 }, 00:32:07.905 { 00:32:07.905 "method": "bdev_iscsi_set_options", 00:32:07.905 "params": { 00:32:07.905 "timeout_sec": 30 00:32:07.905 } 00:32:07.905 }, 00:32:07.905 { 00:32:07.905 "method": "bdev_nvme_set_options", 00:32:07.905 "params": { 00:32:07.905 "action_on_timeout": "none", 00:32:07.905 "timeout_us": 0, 00:32:07.905 "timeout_admin_us": 0, 00:32:07.905 "keep_alive_timeout_ms": 10000, 00:32:07.905 "arbitration_burst": 0, 00:32:07.905 "low_priority_weight": 0, 00:32:07.905 "medium_priority_weight": 0, 00:32:07.905 "high_priority_weight": 0, 00:32:07.905 "nvme_adminq_poll_period_us": 10000, 00:32:07.905 "nvme_ioq_poll_period_us": 0, 00:32:07.905 "io_queue_requests": 512, 00:32:07.905 "delay_cmd_submit": true, 00:32:07.905 "transport_retry_count": 4, 00:32:07.905 "bdev_retry_count": 3, 00:32:07.905 "transport_ack_timeout": 0, 00:32:07.905 "ctrlr_loss_timeout_sec": 0, 00:32:07.905 "reconnect_delay_sec": 0, 00:32:07.905 "fast_io_fail_timeout_sec": 0, 00:32:07.905 "disable_auto_failback": false, 00:32:07.905 "generate_uuids": false, 00:32:07.905 "transport_tos": 0, 00:32:07.905 "nvme_error_stat": false, 00:32:07.905 "rdma_srq_size": 0, 00:32:07.905 "io_path_stat": false, 00:32:07.905 "allow_accel_sequence": false, 00:32:07.905 "rdma_max_cq_size": 0, 00:32:07.905 "rdma_cm_event_timeout_ms": 0, 00:32:07.905 "dhchap_digests": [ 00:32:07.905 "sha256", 00:32:07.905 "sha384", 00:32:07.905 "sha512" 00:32:07.905 ], 00:32:07.905 "dhchap_dhgroups": [ 00:32:07.905 "null", 
00:32:07.905 "ffdhe2048", 00:32:07.905 "ffdhe3072", 00:32:07.905 "ffdhe4096", 00:32:07.905 "ffdhe6144", 00:32:07.905 "ffdhe8192" 00:32:07.905 ] 00:32:07.905 } 00:32:07.905 }, 00:32:07.905 { 00:32:07.905 "method": "bdev_nvme_attach_controller", 00:32:07.905 "params": { 00:32:07.905 "name": "nvme0", 00:32:07.905 "trtype": "TCP", 00:32:07.905 "adrfam": "IPv4", 00:32:07.905 "traddr": "127.0.0.1", 00:32:07.905 "trsvcid": "4420", 00:32:07.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:07.905 "prchk_reftag": false, 00:32:07.905 "prchk_guard": false, 00:32:07.905 "ctrlr_loss_timeout_sec": 0, 00:32:07.905 "reconnect_delay_sec": 0, 00:32:07.905 "fast_io_fail_timeout_sec": 0, 00:32:07.905 "psk": "key0", 00:32:07.905 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:07.905 "hdgst": false, 00:32:07.905 "ddgst": false 00:32:07.905 } 00:32:07.905 }, 00:32:07.905 { 00:32:07.905 "method": "bdev_nvme_set_hotplug", 00:32:07.905 "params": { 00:32:07.905 "period_us": 100000, 00:32:07.905 "enable": false 00:32:07.905 } 00:32:07.905 }, 00:32:07.905 { 00:32:07.905 "method": "bdev_wait_for_examine" 00:32:07.905 } 00:32:07.905 ] 00:32:07.905 }, 00:32:07.905 { 00:32:07.905 "subsystem": "nbd", 00:32:07.905 "config": [] 00:32:07.905 } 00:32:07.905 ] 00:32:07.905 }' 00:32:07.905 12:33:36 keyring_file -- keyring/file.sh@114 -- # killprocess 2343063 00:32:07.905 12:33:36 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2343063 ']' 00:32:07.905 12:33:36 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2343063 00:32:07.905 12:33:36 keyring_file -- common/autotest_common.sh@952 -- # uname 00:32:07.905 12:33:36 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:07.905 12:33:36 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2343063 00:32:07.905 12:33:36 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:32:07.905 12:33:36 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:32:07.905 12:33:36 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2343063' 00:32:07.905 killing process with pid 2343063 00:32:07.905 12:33:36 keyring_file -- common/autotest_common.sh@966 -- # kill 2343063 00:32:07.905 Received shutdown signal, test time was about 1.000000 seconds 00:32:07.905 00:32:07.905 Latency(us) 00:32:07.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.905 =================================================================================================================== 00:32:07.905 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:07.905 12:33:36 keyring_file -- common/autotest_common.sh@971 -- # wait 2343063 00:32:08.165 12:33:36 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:08.165 12:33:36 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:08.165 "subsystems": [ 00:32:08.165 { 00:32:08.165 "subsystem": "keyring", 00:32:08.165 "config": [ 00:32:08.165 { 00:32:08.165 "method": "keyring_file_add_key", 00:32:08.165 "params": { 00:32:08.165 "name": "key0", 00:32:08.165 "path": "/tmp/tmp.dQR90BOlcm" 00:32:08.165 } 00:32:08.165 }, 00:32:08.165 { 00:32:08.165 "method": "keyring_file_add_key", 00:32:08.165 "params": { 00:32:08.165 "name": "key1", 00:32:08.165 "path": "/tmp/tmp.agR2xekUeZ" 00:32:08.165 } 00:32:08.165 } 00:32:08.165 ] 00:32:08.165 }, 00:32:08.165 { 00:32:08.165 "subsystem": 
"iobuf", 00:32:08.165 "config": [ 00:32:08.165 { 00:32:08.165 "method": "iobuf_set_options", 00:32:08.165 "params": { 00:32:08.165 "small_pool_count": 8192, 00:32:08.165 "large_pool_count": 1024, 00:32:08.165 "small_bufsize": 8192, 00:32:08.165 "large_bufsize": 135168 00:32:08.165 } 00:32:08.165 } 00:32:08.165 ] 00:32:08.165 }, 00:32:08.165 { 00:32:08.165 "subsystem": "sock", 00:32:08.165 "config": [ 00:32:08.165 { 00:32:08.165 "method": "sock_impl_set_options", 00:32:08.165 "params": { 00:32:08.165 "impl_name": "posix", 00:32:08.165 "recv_buf_size": 2097152, 00:32:08.165 "send_buf_size": 2097152, 00:32:08.165 "enable_recv_pipe": true, 00:32:08.165 "enable_quickack": false, 00:32:08.165 "enable_placement_id": 0, 00:32:08.165 "enable_zerocopy_send_server": true, 00:32:08.165 "enable_zerocopy_send_client": false, 00:32:08.165 "zerocopy_threshold": 0, 00:32:08.165 "tls_version": 0, 00:32:08.165 "enable_ktls": false 00:32:08.165 } 00:32:08.165 }, 00:32:08.165 { 00:32:08.165 "method": "sock_impl_set_options", 00:32:08.165 "params": { 00:32:08.165 "impl_name": "ssl", 00:32:08.165 "recv_buf_size": 4096, 00:32:08.165 "send_buf_size": 4096, 00:32:08.165 "enable_recv_pipe": true, 00:32:08.165 "enable_quickack": false, 00:32:08.165 "enable_placement_id": 0, 00:32:08.165 "enable_zerocopy_send_server": true, 00:32:08.165 "enable_zerocopy_send_client": false, 00:32:08.165 "zerocopy_threshold": 0, 00:32:08.165 "tls_version": 0, 00:32:08.165 "enable_ktls": false 00:32:08.166 } 00:32:08.166 } 00:32:08.166 ] 00:32:08.166 }, 00:32:08.166 { 00:32:08.166 "subsystem": "vmd", 00:32:08.166 "config": [] 00:32:08.166 }, 00:32:08.166 { 00:32:08.166 "subsystem": "accel", 00:32:08.166 "config": [ 00:32:08.166 { 00:32:08.166 "method": "accel_set_options", 00:32:08.166 "params": { 00:32:08.166 "small_cache_size": 128, 00:32:08.166 "large_cache_size": 16, 00:32:08.166 "task_count": 2048, 00:32:08.166 "sequence_count": 2048, 00:32:08.166 "buf_count": 2048 00:32:08.166 } 00:32:08.166 } 00:32:08.166 ] 00:32:08.166 }, 00:32:08.166 { 00:32:08.166 "subsystem": "bdev", 00:32:08.166 "config": [ 00:32:08.166 { 00:32:08.166 "method": "bdev_set_options", 00:32:08.166 "params": { 00:32:08.166 "bdev_io_pool_size": 65535, 00:32:08.166 "bdev_io_cache_size": 256, 00:32:08.166 "bdev_auto_examine": true, 00:32:08.166 "iobuf_small_cache_size": 128, 00:32:08.166 "iobuf_large_cache_size": 16 00:32:08.166 } 00:32:08.166 }, 00:32:08.166 { 00:32:08.166 "method": "bdev_raid_set_options", 00:32:08.166 "params": { 00:32:08.166 "process_window_size_kb": 1024 00:32:08.166 } 00:32:08.166 }, 00:32:08.166 { 00:32:08.166 "method": "bdev_iscsi_set_options", 00:32:08.166 "params": { 00:32:08.166 "timeout_sec": 30 00:32:08.166 } 00:32:08.166 }, 00:32:08.166 { 00:32:08.166 "method": "bdev_nvme_set_options", 00:32:08.166 "params": { 00:32:08.166 "action_on_timeout": "none", 00:32:08.166 "timeout_us": 0, 00:32:08.166 "timeout_admin_us": 0, 00:32:08.166 "keep_alive_timeout_ms": 10000, 00:32:08.166 "arbitration_burst": 0, 00:32:08.166 "low_priority_weight": 0, 00:32:08.166 "medium_priority_weight": 0, 00:32:08.166 "high_priority_weight": 0, 00:32:08.166 "nvme_adminq_poll_period_us": 10000, 00:32:08.166 "nvme_ioq_poll_period_us": 0, 00:32:08.166 "io_queue_requests": 512, 00:32:08.166 "delay_cmd_submit": true, 00:32:08.166 "transport_retry_count": 4, 00:32:08.166 "bdev_retry_count": 3, 00:32:08.166 "transport_ack_timeout": 0, 00:32:08.166 "ctrlr_loss_timeout_sec": 0, 00:32:08.166 "reconnect_delay_sec": 0, 00:32:08.166 "fast_io_fail_timeout_sec": 0, 00:32:08.166 
"disable_auto_failback": false, 00:32:08.166 "generate_uuids": false, 00:32:08.166 "transport_tos": 0, 00:32:08.166 "nvme_error_stat": false, 00:32:08.166 "rdma_srq_size": 0, 00:32:08.166 "io_path_stat": false, 00:32:08.166 "allow_accel_sequence": false, 00:32:08.166 "rdma_max_cq_size": 0, 00:32:08.166 "rdma_cm_event_timeout_ms": 0, 00:32:08.166 "dhchap_digests": [ 00:32:08.166 "sha256", 00:32:08.166 "sha384", 00:32:08.166 "sha512" 00:32:08.166 ], 00:32:08.166 "dhchap_dhgroups": [ 00:32:08.166 "null", 00:32:08.166 "ffdhe2048", 00:32:08.166 "ffdhe3072", 00:32:08.166 "ffdhe4096", 00:32:08.166 "ffdhe6144", 00:32:08.166 "ffdhe8192" 00:32:08.166 ] 00:32:08.166 } 00:32:08.166 }, 00:32:08.166 { 00:32:08.166 "method": "bdev_nvme_attach_controller", 00:32:08.166 "params": { 00:32:08.166 "name": "nvme0", 00:32:08.166 "trtype": "TCP", 00:32:08.166 "adrfam": "IPv4", 00:32:08.166 "traddr": "127.0.0.1", 00:32:08.166 "trsvcid": "4420", 00:32:08.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:08.166 "prchk_reftag": false, 00:32:08.166 "prchk_guard": false, 00:32:08.166 "ctrlr_loss_timeout_sec": 0, 00:32:08.166 "reconnect_delay_sec": 0, 00:32:08.166 "fast_io_fail_timeout_sec": 0, 00:32:08.166 "psk": "key0", 00:32:08.166 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:08.166 "hdgst": false, 00:32:08.166 "ddgst": false 00:32:08.166 } 00:32:08.166 }, 00:32:08.166 { 00:32:08.166 "method": "bdev_nvme_set_hotplug", 00:32:08.166 "params": { 00:32:08.166 "period_us": 100000, 00:32:08.166 "enable": false 00:32:08.166 } 00:32:08.166 }, 00:32:08.166 { 00:32:08.166 "method": "bdev_wait_for_examine" 00:32:08.166 } 00:32:08.166 ] 00:32:08.166 }, 00:32:08.166 { 00:32:08.166 "subsystem": "nbd", 00:32:08.166 "config": [] 00:32:08.166 } 00:32:08.166 ] 00:32:08.166 }' 00:32:08.166 12:33:36 keyring_file -- keyring/file.sh@117 -- # bperfpid=2344747 00:32:08.166 12:33:36 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2344747 /var/tmp/bperf.sock 00:32:08.166 12:33:36 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2344747 ']' 00:32:08.166 12:33:36 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:08.166 12:33:36 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:08.166 12:33:36 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:08.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:08.166 12:33:36 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:08.166 12:33:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:08.166 [2024-05-15 12:33:36.511261] Starting SPDK v24.05-pre git sha1 62bc4f069 / DPDK 23.11.0 initialization... 
00:32:08.166 [2024-05-15 12:33:36.511320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2344747 ] 00:32:08.166 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.166 [2024-05-15 12:33:36.576347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.166 [2024-05-15 12:33:36.651948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.425 [2024-05-15 12:33:36.802835] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:08.994 12:33:37 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:08.994 12:33:37 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:32:08.994 12:33:37 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:08.994 12:33:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.994 12:33:37 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:08.994 12:33:37 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:08.994 12:33:37 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:08.994 12:33:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:08.994 12:33:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:08.994 12:33:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:08.994 12:33:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.994 12:33:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:09.254 12:33:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:09.254 12:33:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:09.254 12:33:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:09.254 12:33:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:09.254 12:33:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:09.254 12:33:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:09.254 12:33:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:09.513 12:33:37 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:09.513 12:33:37 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:09.513 12:33:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:09.513 12:33:37 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:09.513 12:33:38 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:09.513 12:33:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:09.514 12:33:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.dQR90BOlcm /tmp/tmp.agR2xekUeZ 00:32:09.514 12:33:38 keyring_file -- keyring/file.sh@20 -- # killprocess 2344747 00:32:09.514 12:33:38 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2344747 ']' 00:32:09.514 12:33:38 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2344747 00:32:09.514 12:33:38 keyring_file -- common/autotest_common.sh@952 -- # 
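The phase traced above closes the loop on persistence: save_config captured the first bdevperf instance's runtime state as JSON (including both keyring_file_add_key entries and the bdev_nvme_attach_controller call with "psk": "key0"), that instance was killed, and a fresh bdevperf was started with the captured JSON fed in through -c /dev/fd/63; once it was up, keyring_get_keys again reported two keys with the expected refcnts and bdev_nvme_get_controllers showed nvme0, all reconstructed from the config alone. A sketch of the same save-and-replay pattern (the process substitution stands in for the test's /dev/fd/63 plumbing, and $bperfpid is however the caller tracks the old instance):

  # Sketch: capture runtime config over RPC, then replay it into a new bdevperf.
  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

  config=$("$rpc" -s "$sock" save_config)     # JSON like the dump shown above
  kill "$bperfpid" && wait "$bperfpid"        # stop the instance that produced it

  "$bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r "$sock" -z -c <(echo "$config") &
  bperfpid=$!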
uname 00:32:09.514 12:33:38 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:09.514 12:33:38 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2344747 00:32:09.773 12:33:38 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:32:09.773 12:33:38 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:32:09.773 12:33:38 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2344747' 00:32:09.773 killing process with pid 2344747 00:32:09.773 12:33:38 keyring_file -- common/autotest_common.sh@966 -- # kill 2344747 00:32:09.773 Received shutdown signal, test time was about 1.000000 seconds 00:32:09.773 00:32:09.773 Latency(us) 00:32:09.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.773 =================================================================================================================== 00:32:09.773 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:09.773 12:33:38 keyring_file -- common/autotest_common.sh@971 -- # wait 2344747 00:32:09.773 12:33:38 keyring_file -- keyring/file.sh@21 -- # killprocess 2343008 00:32:09.773 12:33:38 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2343008 ']' 00:32:09.773 12:33:38 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2343008 00:32:09.773 12:33:38 keyring_file -- common/autotest_common.sh@952 -- # uname 00:32:09.773 12:33:38 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:09.773 12:33:38 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2343008 00:32:10.032 12:33:38 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:32:10.032 12:33:38 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:32:10.032 12:33:38 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2343008' 00:32:10.032 killing process with pid 2343008 00:32:10.032 12:33:38 keyring_file -- common/autotest_common.sh@966 -- # kill 2343008 00:32:10.032 [2024-05-15 12:33:38.311564] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:10.032 [2024-05-15 12:33:38.311597] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:10.032 12:33:38 keyring_file -- common/autotest_common.sh@971 -- # wait 2343008 00:32:10.292 00:32:10.292 real 0m11.986s 00:32:10.292 user 0m27.253s 00:32:10.292 sys 0m3.344s 00:32:10.293 12:33:38 keyring_file -- common/autotest_common.sh@1123 -- # xtrace_disable 00:32:10.293 12:33:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:10.293 ************************************ 00:32:10.293 END TEST keyring_file 00:32:10.293 ************************************ 00:32:10.293 12:33:38 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:32:10.293 12:33:38 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:32:10.293 12:33:38 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:10.293 12:33:38 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:10.293 12:33:38 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:32:10.293 12:33:38 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:32:10.293 12:33:38 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:32:10.293 12:33:38 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:10.293 
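Teardown goes through the killprocess helper twice, once for the bdevperf instance (pid 2344747) and once for the target process (pid 2343008): it checks the pid is alive, resolves the command name with ps (reactor_1 and reactor_0 here), signals it, and waits so the shutdown output and deprecation counters land in the log before END TEST. A condensed reading of that pattern (not the literal autotest_common.sh code, which also special-cases sudo-owned processes):

  # Condensed sketch of the killprocess pattern expanded above.
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                  # still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 / reactor_1
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true             # reap it when it is our child
  }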
12:33:38 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:10.293 12:33:38 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:10.293 12:33:38 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:32:10.293 12:33:38 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:10.293 12:33:38 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:32:10.293 12:33:38 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:10.293 12:33:38 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:10.293 12:33:38 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:10.293 12:33:38 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:32:10.293 12:33:38 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:32:10.293 12:33:38 -- common/autotest_common.sh@721 -- # xtrace_disable 00:32:10.293 12:33:38 -- common/autotest_common.sh@10 -- # set +x 00:32:10.293 12:33:38 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:32:10.293 12:33:38 -- common/autotest_common.sh@1389 -- # local autotest_es=0 00:32:10.293 12:33:38 -- common/autotest_common.sh@1390 -- # xtrace_disable 00:32:10.293 12:33:38 -- common/autotest_common.sh@10 -- # set +x 00:32:16.906 INFO: APP EXITING 00:32:16.906 INFO: killing all VMs 00:32:16.906 INFO: killing vhost app 00:32:16.906 INFO: EXIT DONE 00:32:19.443 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:19.443 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:19.443 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:19.443 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:19.443 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:19.443 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:19.443 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:19.443 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:19.443 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:19.444 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:19.444 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:19.703 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:19.703 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:19.703 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:19.703 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:19.703 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:19.703 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:32:22.996 Cleaning 00:32:22.996 Removing: /var/run/dpdk/spdk0/config 00:32:22.996 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:22.996 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:22.996 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:22.996 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:22.996 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:22.996 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:22.996 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:22.996 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:22.996 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:22.996 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:22.996 Removing: /var/run/dpdk/spdk1/config 00:32:22.996 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:22.996 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:22.996 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:22.996 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:22.996 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:22.996 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:22.996 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:22.996 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:22.996 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:22.996 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:22.996 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:22.996 Removing: /var/run/dpdk/spdk2/config 00:32:22.996 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:22.996 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:22.996 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:22.996 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:22.996 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:22.996 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:22.996 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:22.996 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:22.996 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:22.996 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:22.996 Removing: /var/run/dpdk/spdk3/config 00:32:22.996 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:22.996 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:22.996 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:22.996 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:22.996 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:22.996 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:22.996 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:22.996 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:22.996 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:22.996 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:22.996 Removing: /var/run/dpdk/spdk4/config 00:32:22.996 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:22.996 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:22.996 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:22.996 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:22.996 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:22.996 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:22.996 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:22.996 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:22.996 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:22.996 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:22.996 Removing: /dev/shm/bdev_svc_trace.1 00:32:22.996 Removing: /dev/shm/nvmf_trace.0 00:32:22.996 Removing: /dev/shm/spdk_tgt_trace.pid1936243 00:32:22.996 Removing: /var/run/dpdk/spdk0 00:32:22.996 Removing: /var/run/dpdk/spdk1 00:32:22.996 Removing: /var/run/dpdk/spdk2 00:32:22.996 Removing: /var/run/dpdk/spdk3 00:32:22.996 Removing: /var/run/dpdk/spdk4 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1933258 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1935023 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1936243 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1936945 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1937925 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1938127 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1939169 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1939430 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1939598 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1941290 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1942735 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1943046 
00:32:22.996 Removing: /var/run/dpdk/spdk_pid1943372 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1943706 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1944039 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1944326 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1944608 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1944920 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1945856 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1948934 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1949366 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1949765 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1949807 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1950374 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1950632 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1951043 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1951215 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1951515 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1951773 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1951908 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1952083 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1952656 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1952855 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1953151 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1953426 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1953647 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1953718 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1953999 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1954289 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1954576 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1954861 00:32:22.996 Removing: /var/run/dpdk/spdk_pid1955142 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1955429 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1955716 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1955996 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1956252 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1956479 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1956721 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1956947 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1957182 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1957466 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1957747 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1958032 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1958319 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1958605 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1958893 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1959176 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1959295 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1959755 00:32:23.256 Removing: /var/run/dpdk/spdk_pid1963689 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2011464 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2015988 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2027158 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2032756 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2037268 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2037960 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2050050 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2050105 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2051108 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2051910 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2052919 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2053511 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2053612 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2053852 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2054040 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2054052 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2054878 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2055901 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2056709 
00:32:23.256 Removing: /var/run/dpdk/spdk_pid2057258 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2057391 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2057660 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2058932 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2060056 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2069346 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2069633 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2074169 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2080298 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2083027 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2094005 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2103323 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2105159 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2106214 00:32:23.256 Removing: /var/run/dpdk/spdk_pid2124524 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2128751 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2153099 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2158478 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2160075 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2162047 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2162210 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2162489 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2162755 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2163337 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2165348 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2166334 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2166905 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2169272 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2169902 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2170748 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2175045 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2185788 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2189923 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2196454 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2198388 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2200006 00:32:23.515 Removing: /var/run/dpdk/spdk_pid2204587 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2208967 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2216857 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2216939 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2221885 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2221998 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2222192 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2222710 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2222726 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2227502 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2228163 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2232801 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2235649 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2241317 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2247066 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2256497 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2263848 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2263877 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2283042 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2283827 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2284381 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2285178 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2286043 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2286827 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2287394 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2287999 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2292471 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2292746 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2299656 00:32:23.516 Removing: /var/run/dpdk/spdk_pid2299913 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2302247 
00:32:23.775 Removing: /var/run/dpdk/spdk_pid2310308 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2310372 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2315855 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2317863 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2319874 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2321074 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2323092 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2324314 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2333748 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2334280 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2334807 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2337703 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2338365 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2338900 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2343008 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2343063 00:32:23.775 Removing: /var/run/dpdk/spdk_pid2344747 00:32:23.775 Clean 00:32:23.775 12:33:52 -- common/autotest_common.sh@1448 -- # return 0 00:32:23.775 12:33:52 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:32:23.775 12:33:52 -- common/autotest_common.sh@727 -- # xtrace_disable 00:32:23.775 12:33:52 -- common/autotest_common.sh@10 -- # set +x 00:32:23.775 12:33:52 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:32:23.775 12:33:52 -- common/autotest_common.sh@727 -- # xtrace_disable 00:32:23.775 12:33:52 -- common/autotest_common.sh@10 -- # set +x 00:32:24.035 12:33:52 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:24.035 12:33:52 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:24.035 12:33:52 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:24.035 12:33:52 -- spdk/autotest.sh@387 -- # hash lcov 00:32:24.035 12:33:52 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:24.035 12:33:52 -- spdk/autotest.sh@389 -- # hostname 00:32:24.035 12:33:52 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:24.035 geninfo: WARNING: invalid characters removed from testname! 
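The run finishes with coverage post-processing: the test-time lcov capture written above (cov_test.info) is merged with the baseline capture into cov_total.info, then paths that should not count toward SPDK coverage (DPDK, system headers, and a few example apps) are stripped with successive lcov -r passes before the intermediates are removed, as the commands that follow show. The same pipeline in condensed form (the real invocations also pass the branch/function-coverage rc options and --no-external seen in the trace):

  # Condensed form of the coverage aggregation at the end of the run.
  out=$output_dir    # .../spdk/../output in this job
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
  done
  rm -f cov_base.info cov_test.info   # intermediates, removed relative to the current directory as in the trace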
00:32:45.976 12:34:12 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:46.545 12:34:14 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:48.453 12:34:16 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:49.832 12:34:18 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:51.801 12:34:19 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:53.178 12:34:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:55.084 12:34:23 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:55.084 12:34:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:55.084 12:34:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:32:55.084 12:34:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:55.084 12:34:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:55.084 12:34:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:55.084 12:34:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:55.084 12:34:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:55.084 12:34:23 -- paths/export.sh@5 -- $ export PATH
00:32:55.084 12:34:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:55.084 12:34:23 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:32:55.084 12:34:23 -- common/autobuild_common.sh@437 -- $ date +%s
00:32:55.084 12:34:23 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715769263.XXXXXX
00:32:55.084 12:34:23 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715769263.PHAA4S
00:32:55.084 12:34:23 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:32:55.084 12:34:23 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:32:55.084 12:34:23 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:32:55.084 12:34:23 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:32:55.084 12:34:23 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:32:55.084 12:34:23 -- common/autobuild_common.sh@453 -- $ get_config_params
00:32:55.084 12:34:23 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:32:55.084 12:34:23 -- common/autotest_common.sh@10 -- $ set +x
00:32:55.084 12:34:23 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:32:55.084 12:34:23 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:32:55.084 12:34:23 -- pm/common@17 -- $ local monitor
00:32:55.084 12:34:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:55.084 12:34:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:55.084 12:34:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:55.084 12:34:23 -- pm/common@21 -- $ date +%s
00:32:55.084 12:34:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:55.084 12:34:23 -- pm/common@21 -- $ date +%s
00:32:55.084 12:34:23 -- pm/common@25 -- $ sleep 1
00:32:55.084 12:34:23 -- pm/common@21 -- $ date +%s
00:32:55.084 12:34:23 -- pm/common@21 -- $ date +%s
00:32:55.084 12:34:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715769263
00:32:55.084 12:34:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715769263
00:32:55.084 12:34:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715769263
00:32:55.084 12:34:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715769263
00:32:55.084 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715769263_collect-vmstat.pm.log
00:32:55.084 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715769263_collect-cpu-load.pm.log
00:32:55.084 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715769263_collect-cpu-temp.pm.log
00:32:55.084 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715769263_collect-bmc-pm.bmc.pm.log
00:32:56.022 12:34:24 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:32:56.022 12:34:24 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:32:56.022 12:34:24 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:56.022 12:34:24 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:32:56.022 12:34:24 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:32:56.022 12:34:24 -- spdk/autopackage.sh@19 -- $ timing_finish
00:32:56.022 12:34:24 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:56.022 12:34:24 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:32:56.022 12:34:24 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:56.022 12:34:24 -- spdk/autopackage.sh@20 -- $ exit 0
00:32:56.022 12:34:24 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:32:56.022 12:34:24 -- pm/common@29 -- $ signal_monitor_resources TERM
00:32:56.022 12:34:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:32:56.022 12:34:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:56.022 12:34:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:32:56.022 12:34:24 -- pm/common@44 -- $ pid=2358411
00:32:56.022 12:34:24 -- pm/common@50 -- $ kill -TERM 2358411
00:32:56.022 12:34:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:56.022 12:34:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:32:56.022 12:34:24 -- pm/common@44 -- $ pid=2358413
00:32:56.022 12:34:24 -- pm/common@50 -- $ kill -TERM 2358413
00:32:56.022 12:34:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:56.022 12:34:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:32:56.022 12:34:24 -- pm/common@44 -- $ pid=2358415
00:32:56.022 12:34:24 -- pm/common@50 -- $ kill -TERM 2358415
00:32:56.022 12:34:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:56.022 12:34:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:32:56.022 12:34:24 -- pm/common@44 -- $ pid=2358443
00:32:56.022 12:34:24 -- pm/common@50 -- $ sudo -E kill -TERM 2358443
00:32:56.022 + [[ -n 1823330 ]]
00:32:56.022 + sudo kill 1823330
00:32:56.032 [Pipeline] }
00:32:56.050 [Pipeline] // stage
00:32:56.055 [Pipeline] }
00:32:56.072 [Pipeline] // timeout
00:32:56.077 [Pipeline] }
00:32:56.093 [Pipeline] // catchError
00:32:56.099 [Pipeline] }
00:32:56.116 [Pipeline] // wrap
00:32:56.122 [Pipeline] }
00:32:56.137 [Pipeline] // catchError
00:32:56.145 [Pipeline] stage
00:32:56.147 [Pipeline] { (Epilogue)
00:32:56.162 [Pipeline] catchError
00:32:56.164 [Pipeline] {
00:32:56.179 [Pipeline] echo
00:32:56.180 Cleanup processes
00:32:56.186 [Pipeline] sh
00:32:56.471 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:56.472 2358513 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:32:56.472 2358860 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:56.495 [Pipeline] sh
00:32:56.780 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:56.780 ++ grep -v 'sudo pgrep'
00:32:56.780 ++ awk '{print $1}'
00:32:56.780 + sudo kill -9 2358513
00:32:56.793 [Pipeline] sh
00:32:57.077 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:57.077 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:33:02.346 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:33:05.667 [Pipeline] sh
00:33:05.949 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:05.949 Artifacts sizes are good
00:33:05.964 [Pipeline] archiveArtifacts
00:33:05.971 Archiving artifacts
00:33:06.122 [Pipeline] sh
00:33:06.406 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:06.421 [Pipeline] cleanWs
00:33:06.432 [WS-CLEANUP] Deleting project workspace...
00:33:06.432 [WS-CLEANUP] Deferred wipeout is used...
00:33:06.438 [WS-CLEANUP] done
00:33:06.440 [Pipeline] }
00:33:06.460 [Pipeline] // catchError
00:33:06.472 [Pipeline] sh
00:33:06.755 + logger -p user.info -t JENKINS-CI
00:33:06.764 [Pipeline] }
00:33:06.781 [Pipeline] // stage
00:33:06.787 [Pipeline] }
00:33:06.803 [Pipeline] // node
00:33:06.808 [Pipeline] End of Pipeline
00:33:06.834 Finished: SUCCESS